Technology

Challenged

 

&

 

 

Understanding Our Creations

 

Choosing Our Future

 

 

 

Miguel F. Aznar

 

 

 

KnowledgeContext

Santa Cruz, California


 

 

 

A portion of the profits from sale of this book
supports the philanthropic activities of KnowledgeContext,
a 501(c)(3) educational nonprofit corporation.

 

 

 

 

Published by KnowledgeContext

800 Brommer Street, Suite 28, Santa Cruz, CA  95062

www.KnowledgeContext.org

 

Copyright © 2005 by Miguel F. Aznar

 

All rights reserved
including the right of reproduction
in whole or in part in any form.

 

Printed in the United States of America
on recycled paper

 

 

First KnowledgeContext edition, 2005

 

Aznar, M. F. (Miguel Flach), 1964 –

Technology Challenged: Understanding Our Creations
& Choosing Our Future / Miguel F. Aznar

 

Includes bibliographical references and index

 

 

1. Technology

2. Technology–Social Aspects

3. Technology–History

4. Technology–Risk Assessment

 

Library of Congress Control Number:  2004099558

 

ISBN-10:  0-9763858-0-5

 


 

Contents

Introduction

Overview

Chapter 1: What is Technology?
    Tools that extend our ability
    Systems: The Intangible Levers
    Information: The invisible ingredient
    Not applied science

Chapter 2: Why do We Use Technology?
    Communication
    Health
    Entertainment
    Organization

Chapter 3: Where Does Technology Come From?
    Other Technology
    Dense populations
    Specialization
    Plan or Accident
    Protection

Chapter 4: How does Technology Work?
    Energy: the muscle behind technology
    Organization Part 1: Centralized vs. Distributed
    Control: like riding a bicycle
    Information: algorithms
    Organization Part 2: Repetition & Layers
    Emergent behavior

Chapter 5: How does Technology Change?
    Disappearing Technology
    Necessity’s Mother & Daughter
    Advantage, Compatibility, Risk, Visibility
    Autocatalysis
    Evolution & Memes

Chapter 6: How does Technology Change Us?
    Methuselah’s Burden
    Working up the Pyramid
    Our Perception of Reality
    The Human Species

Chapter 7: How Do We Change Technology?
    Engineer
    Govern
    Promote
    Manage
    Invest
    Question

Chapter 8: What are Technology’s Costs and Benefits?
    Enabler vs. Crutch
    Complexity vs. Predictability
    Catastrophic vs. Chronic
    Control vs. Freedom
    Progress vs. Obsolescence

Chapter 9: How Do We Evaluate Technology?
    Survival
    Ritual
    Power
    Authority
    Economic
    Ecologic

The Tao of Technology

Going Beyond this Book

Index

Acknowledgements / About KnowledgeContext / About the Author

 



Introduction

 

 

 

 

 

 

 

It might be a familiar progression, transpiring on many worlds—
a planet, newly formed, placidly revolves around its star;
life slowly forms; a kaleidoscopic procession of creatures evolves;
intelligence emerges which, at least up to a point,
confers enormous survival value; and then technology
is invented…In a flash, they create world-altering contrivances.
Some planetary civilizations see their way through,
place limits on what may and what must not be done,
and safely pass through the time of perils.
Others, not so lucky or so prudent, perish.

— Carl Sagan

 

The time of perils has already begun.  This book offers a tool for navigating them.  It is a personal tool, one we might use in our everyday choices.  And it is a global tool, one that could help our civilization survive.

Just how did we get into this situation?  Expanding on Carl Sagan’s description: For most of the earth’s existence it has hosted organic life.  Single-cell life evolved into multicellular plants and animals.  Some animals started to use simple tools—from otters cracking open shellfish with rocks to chimpanzees dipping into termite mounds with sticks—but another animal went farther.  It used tools to create even better tools.  With spears it hunted.  With sewn animal hides it survived the cold.  With plows it created surplus.  With tablets and scribes it recorded information.  These tools led to the printing press, microscope, steam engine, telephone, airplane, and computer.  We call these things technology, and for a million years they have been transforming our environment…with ever-increasing power, costs, and benefits.

World War II saw the creation of one of our most potent contrivances so far: a weapon based on a nuclear chain reaction.  Less well known than the atomic bomb itself was a possible side effect of its detonation: a second chain reaction that might incinerate the earth’s entire atmosphere.  While the nuclear chain reaction was limited by the amount of radioactive fuel contained in the bomb, the atmospheric chain reaction would be limited only by the amount of oxygen cloaking the earth.

What to do?  Scientists building the first atomic bomb estimated the risk to be as high as three chances in a million.  They weighed the costs and benefits…to themselves, their country, and their planet.  They considered the objective calculations and their subjective values before proceeding with the detonation.

The atmosphere did burn, but only in proximity to the explosion.  There was no atmospheric chain reaction.  Had there been, we would not be here to write about it.  The atomic bomb is hardly alone in leveraging human power to a perilous height:

·         Knives, among our oldest tools, in the new form of box cutters were used to hijack airplanes, which were then used as suicide weapons in the U.S. on September 11, 2001.  Technology gives individuals breathtaking power.

·         The Institute for Biological Energy Alternatives synthesized a completely new virus in just two weeks.  Unlike a bomb, a virus does not explode once, but can multiply and spread.  Severe Acute Respiratory Syndrome (SARS) circled the globe in a matter of days.

·         SARS was neither highly infective nor highly deadly, but Vector, the USSR’s secret bioweapons laboratory, genetically modified diseases to be both.  One product was a smallpox virus designed to be resistant to all known treatment.  Scientists also worked toward viruses and bacteria that would degrade the human immune system or modify behavior.

·         Electric probes implanted in the brains of rats have demonstrated rudimentary “mind control,” foreshadowing the day when entertainment may become as immersive as video games and as addictive as brain-chemistry-altering drugs.  What effect might “digital methamphetamine” have on society?

·         Nanotechnology, the technique of creating objects on the molecular scale, would become more efficient if it could create microscopic robots—“nanobots”—that could, in turn, create more nanobots, which would create still more nanobots.  Factories would then have workers that could also make more workers.  But as Mickey Mouse discovered in The Sorcerer’s Apprentice when his magic broom replicated itself without limit, a self-reproducing tool can quickly escape our control.

There are good reasons for almost any technology.  Even though they can be used as weapons and can transport disease, airplanes provide tremendous benefits.  Those developing a drug-resistant strain of smallpox must have believed that it presented more benefit than cost, at least for them.  The question is not, “Should we have technology?”  Obviously, we have it and—barring catastrophe—we will have much more of it.  The question is, “How do we evaluate it?”  Unfortunately, the most compelling argument for a considered, critical approach would be a spectacular disaster…and that could exterminate us.

 

We live in an increasingly interdependent world and,
due to the progress of technology, our power over nature
has increased by leaps and bounds.  Unless we use that
power wisely, we are in danger of damaging or destroying
both our environment
 and our civilization.

— George Soros

 

Evaluating technology is not just about saving the world, but is part of our everyday lives.  Our education, work, health, and recreation choices pivot on technology.  What kind of car is best for you?  Which software should you buy?  Should you take the new drug your doctor prescribed?  Do you write your senator to support or oppose a missile defense shield or cloning?  What jobs will technology move offshore…before it renders them obsolete?  Billions of people making a thousand billion choices, aggregating—like raindrops building to a flood—to transform the earth.  We did not get from stone tools to genetic engineering without choices.

Key to our individual and collective future is how we make those choices.  But if we are to understand and evaluate technology, we face a monumental problem: technology is complex.  There is more of it than we can fully understand and, beyond that, it is changing faster than we can keep up.  Stone tools changed little over thousands of human generations, but modern technology changes radically within just a single human generation.  Personal computers, cellular telephones, medicinal drugs, and weapons systems render themselves obsolete ever more quickly, and this trend continues to accelerate.  So the small percentage of technology that any individual can fully understand is becoming smaller every day.

In our highly specialized world, even experts rely on experts.  The auto mechanic fixes the car of the computer technician who fixes the computer of the mechanic.  We rely on experts’ reviews, take our doctor’s advice, listen to our friends, and then make gut decisions.  And, while our children exhibit an amazing facility for adapting to and using new technology, they are no better prepared to evaluate it.  Schools teach them that technology is no more than computers, and that knowing how to operate them is equivalent to understanding them.  Learning which buttons to push is no substitute for the ability to evaluate.

In an era of rapidly changing technology, studying the details of what has already been invented is like driving a car while craning out the window and staring down at the blur of asphalt.  This is a dangerous way to drive.  Learning to operate current tools is important for many occupations, but in order to plan ahead we need a grasp of the timeless patterns that have held true for stone tools, plows, computers, and genetic engineering…and may well continue to hold for future technologies.  This is not about how to design a computer or repair a car, but about discovering a big picture that puts these technologies—all technologies—into context.  It is technological literacy.

 

There is a major difference between
technological competence and technological literacy.
Literacy is what everyone needs.
Competence is what a few people need in order to do a job or make a living.
And we need both.

William Wulf 

 

This form of literacy changes how we perceive technology.  Do we treat it as some foreign and strange thing that “experts” create and direct us to use?  Or, quite the opposite, do we create a relationship with technology, putting it within our understanding and influence?  Even a computer engineer or auto mechanic may sometimes take the first view, seeing the other as an expert whose technological domain is completely foreign.  Technologically literate people take the second, more powerful view.

This “big picture” contextual view of technology is precisely what we need to navigate these perilous times—both on a personal and a societal level.  To gain that view, we need to figure out what is true for many technologies, even those not yet invented.  It is in our nature to seek out the patterns around us—all a part of finding our place in the Universe.  We look for patterns in technology much the same way we would look for them in anything…by asking the right questions.


 

Overview

 

 

Being technologically literate is knowing what questions to ask.

— Ira Flatow

 

To understand and evaluate technology, we ask nine questions (each the topic of a chapter) and seek enduring answers.  We start by building a foundation:  In chapters one through four, we examine technology’s identity.  Once we identify technology, we analyze how it changes in chapters five, six, and seven.  Together, these seven chapters give us a foundation for its evaluation, which we do in chapters eight and nine.

1.       What is technology?  Since we are looking for patterns that have long been—and will continue to be—true, we cannot define technology as just the latest inventions.  We need a much broader definition.  In Chapter One, we try several, including tools that extend our abilities.  A definition that includes history—even all the way back to stone tools—may endure into our future.

2.       Why do we use technology?  In Chapter Two, we uncover a few answers that apply to most of what we have invented.  Our desire to communicate, for instance, has been satisfied by writing, paper, printing press, pencil, radio, telephone, television, email, and “instant messaging.”

3.       Where does technology come from?  In Chapter Three we look at environments conducive to the birth of technology.  One pattern we find is that denser populations gave us more chances to encounter and build upon each other’s inventions, speeding up progress.  This might help us understand why technology changed slowly for thousands of years but changes rapidly now.

4.       How does technology work?  All humans use technology, much of it so simple—a hammer or shovel, for instance—that we do not even think about this question.  Perhaps surprisingly, there are simple patterns that hold for a variety of technologies.  One pattern we uncover in Chapter Four, feedback and correction, explains how computerized thermostats, robots, and other complex technologies work.

 

The four questions comprising the first four chapters focus on the “identity” of technology, but “change” is what makes technology a pressing issue, and we address it in the next three chapters: 

5.       How does technology change?  While it is common knowledge that computers double in power every couple of years, few know that this exponential growth curve started back when computers were made of electromagnetic relays and vacuum tubes...or that mechanical clocks followed a similar curve of improvement beginning in the 1300s.

6.       How does technology change us?  It is not just the technology that changes.  We change in response to it, just as any living thing adapts to its environment.  In Chapter Six we find that technology has affected how we work, live, and perceive our world.  Machines have displaced workers and created new jobs, pushing us up a pyramid of work, which often requires more thinking and less brute strength.

7.       How do we change technology?  Just as technology changes us, we change it.  While Chapter Five looks at patterns of change intrinsic to technology, Chapter Seven looks at the ways that people influence it.  As inventors, managers, investors, leaders, teachers, and in many other capacities, our choices and decisions guide technology.  In a sense, humans form much of the environment in which technology either survives or becomes extinct.

8.       What are technology’s costs and benefits?  In Chapter Eight, we search for patterns in the tradeoffs we make with technology.  One tradeoff we examine: the more a technology enables us, the more we become dependent upon it.  This was uncomfortably clear as we approached January 1, 2000 and worried about the Y2K bug, which had the potential to cause many millions of computers to malfunction.  Computers are so useful that we have come to depend on them to schedule our factories, operate antilock brakes on our cars, and keep track of our bank balances.

9.       How do we evaluate technology?  In Chapter Nine, the second of our two chapters on evaluation, we draw on sociology and psychology.  Countries, corporations, religions, clubs, families, and individuals bring their own values to bear when evaluating costs and benefits.  For example, the values of Afghanistan’s Taliban regime labeled Stinger missiles “good technology” and TV satellite dishes “bad technology.”

These nine questions fit into the categories Identity, Change, and Evaluation (from which we get the acronym “ICE-9”), as shown in the diagram below:

[Diagram: the nine questions grouped under the three categories Identity, Change, and Evaluation]

The ICE-9 questions are like a honeycomb, a structure of cubbyholes into which we can place new things we learn about technology.  Asking those questions about technologies we encounter (directly or through TV, radio, newspapers, magazines, or books), we find patterns that hold true for many technologies.  And these patterns form a context.

 

How might we apply these questions to a technology we return to later in this book, radios in North Korea?  First, the background: any radios that can be tuned to frequencies other than the one carrying official broadcasts must be registered with the government.  The tuners are soldered into place and police make surprise inspections, looking for tampering.  Information is so tightly controlled that defectors are surprised to find that South Korea is more prosperous than the North (which has had widespread starvation) and that U.S. donations of rice are not tribute from a subservient nation.  Combating this dearth of information, a group in South Korea is smuggling in disposable radios.  With ICE-9:

9. How do we evaluate it?  The government of North Korea evaluates the radios in terms of their power.  By promoting dissenting views, this technology is a threat to their control.

8. What are its costs and benefits?  Like many technologies, radio offers tradeoffs between such goals as control and freedom.  In this situation, radios subvert control and promote freedom.

7. How do we change it?  Engineers design radios, activists distribute them, organizations fund them, and North Korean police hunt them.

6. How does it change us?  Independent news sources heard over the radios change listeners’ conception of reality: they discover that starvation is not normal and that their nation is not the world’s most powerful.

5. How does it change?  Electronic technologies, in particular, have become smaller and less expensive at an amazing rate, making disposable radios feasible.

4. How does it work?  Many technologies can be characterized as either centralized or distributed.  Unlike a large transmitter, the radios are highly distributed, so many could fail or be destroyed without affecting the rest.

3. Where does it come from?  These radios come from specialization, designed by experts in microelectronics.  Broadcasting, however, was an accident: radio was invented a century ago for one-to-one conversations where telephone wires could not be run.

2. Why do we use it?  Communication is one of the oldest reasons we use technology and it still drives such devices as radios, satellites, cellular phones, and email.

1. What is it?  Radio is a tool to extend our abilities, allowing us to hear something from far away.  But the physical radio that we can touch is just the tip of the iceberg.  Out of sight are systems of technical standards and networks of energy distribution and manufacturing that are just as important.

 

Technology takes on greater meaning when we understand its context.  Oblivious of that context, many are satisfied to simply use technology, ignoring their relationships to it and its relationship to our environment.  Dams also illustrate this point.

Seen from space, some of the largest, most visible technologies on earth are hydroelectric dams.  Invisible from that high perch, however, is how interconnected the whole system is.  Salmon feed in the oceans, enriching their bodies before returning to their spawning grounds.  Unless dams block them.  Salmon farms provide a spawning area below the dams, addressing the dwindling salmon population.  But they do not address a related problem: before the creation of dams and farms, upstream bears, eagles, bobcats, and many other animals ate salmon, and then fertilized inland trees with phosphorous and nitrogen from the ocean.  Trees evolved over eons to thrive on that fertilizer, one of countless relationships now affected by technology.

But change is nothing new.  Primitive tools changed how humans hunted, sheltered, and clashed.  They changed the environment in which we evolved, so, naturally, different traits became the most survivable—for instance, our ability to create and use tools.  Medical technology, including antibiotics, has changed the environment in which infectious diseases compete to survive, helping to evolve antibiotic-resistant bacteria.  It has also extended human life, giving us time to philosophize or invent yet more technology.

 

…there could be a crucial hurdle
at our own present evolutionary stage,
the stage when intelligent life
starts to develop technology.

— Martin Rees

 

That we will change salmon, trees, viruses, and ourselves is inevitable.  And, as technology advances, we will have greater power to cause change.  The open question is whether we will effect those changes with a myopic view of the technology and its most immediate application, or with a view of the grander patterns.

We opened the Introduction with a quote from Carl Sagan cautioning us about the power and danger of our “world-altering contrivances.”  The danger comes from blindly embracing or rejecting technology—rather than influencing our world based on understanding and evaluation.  Creating an intentional future is a collective process, and it is our hope that you, and those you pass this book along to, will find this approach useful.


 

Chapter 1

What is Technology?

 

 

Technology is a gift of God.
After the gift of life it is perhaps
the greatest of God's gifts.
It is the mother of civilizations,
of arts and of sciences.

— Freeman Dyson

 

The Hawaiian bobtail squid would be easy prey on bright moonlit nights if it cast a shadow.  But it does not.  Instead, the squid projects simulated moonlight on the ocean floor where predators wait.  How does a squid extend its abilities to include shining like the moon?  It gathers and eats bacteria called Vibrio fischeri.  These communicate among themselves with chemical signaling molecules so they know how many of their peers have gathered, and when their population hits a critical density, or quorum, they glow.  The squid packs these glowing bacteria into an organ with shutters, lenses, and colored filters so that it can simulate a wide range of moonlight, keeping the squid virtually invisible to predators.  Does ingesting and using luminous bacteria qualify as using technology?

Vibrio fischeri have cousins named Vibrio cholerae, the water-borne bacteria that cause cholera.  While the Hawaiian bobtail squid shines light with the help of Vibrio fischeri, the Vibrio cholerae bacteria actually change their environment.  They enter the human stomach when infected water is consumed.  At first it might appear that they are doomed for, unlike the benign bacteria found in healthy stomachs, Vibrio cholerae are killed by human digestive acids.  Only one in a million survives.  The survivors attach themselves firmly to folds in the lining of the small intestine and then inject a bit of toxin.  The stomach’s reaction to this threatened tissue damage is to flush the area with water, diluting the acid, washing away the other bacteria, and leaving the invader still clinging tightly.  The Vibrio cholerae procreate and, having evolved to avoid putting all their eggs in one stomach, some ride the newly created river—diarrhea—in search of new hosts.  All this flushing water dehydrates the human host and, untreated, cholera can result in death within hours.  Is Vibrio cholerae acting as a technology because it changes its environment?

Is a sea otter smashing shellfish with rocks using technology?  A chimpanzee smashing open nuts with rocks?  A crow dipping for insects with sticks?  Or a beaver damming streams to form ponds?  Does instinctual use count?  Or is being able to manipulate and share information about their tools—and being aware of these processes—necessary?  It all depends on how we define technology.

The root meaning of technology, from Greek, is the study of a craft or art.  John Lienhard, radio host and professor of both engineering and history, suggests that our species should not be called homo sapiens (the wise ones), but homo technologicus (those who use technology).  He defined technology as “the knowledge of making things.”  In his book The Technological Society, Jacques Ellul defined technology in relation to art and science:

 

Art is concrete & subjective

 

Science is abstract & objective

 

Technology is concrete & objective

 

In this chapter, we explore several slightly more specific and practical definitions.  First we consider “any tool that extends our abilities,” seeing how levers, pole vaults, and the Space Shuttle fit.  Then we follow a story from one kind of rock that became important more than 2500 years ago to another kind of rock that has completely transformed our world in the past half century.  Those two rocks and several technologies in between extended our ability to conduct commerce, which illustrates our second definition: “systems of tools.”  Homer’s Iliad and the phenomenon of software piracy bring us to a third definition of “information as technology.”  There is no universally accepted and timeless definition, so we test our proposals against a variety of inventions and developments to see if they seem to make sense.  In the last section we show why “applied science,” although found in some dictionaries, comes up short for our purposes.

Definitions of technology help us to decide where to look for patterns.  Too broad a scope may have few or no patterns that span it.  Too narrow a scope hides patterns.  Something true for televisions alone, for instance, is not nearly as valuable as a pattern common to prehistoric implements, agricultural devices, industrial machines, computer equipment, genetic tools, and even less tangible things, such as monetary systems.  We want a tool for understanding and evaluating the technology of the future, so we look at technology of today and yesterday to get a feel for just what technology is.

 

Tools that extend our ability

How high can you jump?  The Olympic record for the high jump is about eight feet.  If you allow a simple technology like a pole, the record vaults to nearly 20 feet.  Suppose you take a very large, hollow pole, fill it with rocket fuel, add control systems, and provide a pressurized control module on top.  Then the record increases to nearly 300,000 miles with a trip around the moon.

Of course looping around the moon is not an Olympic event, but it does show that technology, perhaps by definition, extends our abilities.  Testing out this definition, we will range from the first lever all the way to the bicycle and the Space Shuttle.

Unlike the moon rocket, the pole vault is simply a lever.  Levers were in use long before Archimedes described them in 260 BCE, but he gets credit because his is the earliest known description.  Long, long ago after a storm knocked down trees, one of our ancestors may have climbed atop one of them.  With a lucky arrangement of trees, that stunned person would have lifted a massive tree off the ground.

 

The right arrangement involves three trees:  lever, fulcrum, and load.  The load tree lies atop one end of the lever tree, which lies across the fulcrum tree and extends up into the air.  If the lever extends far enough from the fulcrum, the small force of the person’s weight will lift the much greater weight of the load.  Another reason Archimedes gets credit for the lever might be his memorable proclamation: “Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.”  Long before television, he understood sound bites.

If the fulcrum is near your end of the lever, you find it hard to push the lever, but the other end moves much farther than does yours.  Consider the garden rake left lying so that you step on the tines, propelling the long wooden handle towards your forehead at speed far greater than the descent of your foot.

If the fulcrum is near the far end of the lever, you find it easy to push the lever, but the other end moves much less than does yours.  Consider a hammer turned around to pull nails out of a board.  The claw end of the hammer moves an inch or two with great force while the wooden handle that you grasp moves six inches or more with less force.  Depending on where you place the fulcrum, you trade force for distance or distance for force.
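The tradeoff described in the last two paragraphs is simply the classical law of the lever, a standard result of elementary physics rather than anything specific to this book: the force on one side multiplied by its distance from the fulcrum balances the force on the other side multiplied by its distance,

$$F_{\text{effort}} \times d_{\text{effort}} = F_{\text{load}} \times d_{\text{load}}.$$

For example, a person weighing 600 newtons standing 3 meters from the fulcrum can balance an 1,800-newton log resting 1 meter from it; the price is that the person’s end of the lever must sweep through three times the distance the log rises.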

We refined trading force for distance with the bicycle.  Equipped with multiple gears, it allows you to crawl up a steep slope or speed on a level surface.  The bicycle incorporates levers into the crank arms, which connect the pedals to an axle.  Chain and gears connect this to the rear wheel.  The bicycle is a simple, yet highly efficient, technology that extends our ability to move.  With a bicycle, a person can cover 200 miles in less than half a day, or ride across the U.S. in less than 10 days.  Well, not a typical person, but some do take their recreation to these extremes.

A couple of bicycle mechanics named Orville and Wilbur Wright combined the ancient technologies of lever and wheel with a newer one, the airfoil, to make a practical airplane.  With some significant upgrades (such as rocket propellant), the airplane became the Space Shuttle attached to the modified pole we described at the beginning of this section.

Tools that extend our abilities is a broad definition, and would include television, which extends our ability to see far distances, and the DVD player, which extends our ability to see into times past.  This definition can also include destructive ends.  On September 11, 2001, a few violent people used a very old technology (knives…well, technically, box cutters) to take control of another technology (four large jet airplanes).  The airplanes, with their load of refined aviation fuel, caused far more damage than a band of prehistoric terrorists wielding knives could have.  Even a drunk driver, erratically maneuvering a ton of steel and glass, has his or her ability to do harm greatly extended.

If we think of a tool as an isolated object—such as an airplane, car, or television—then we are still missing something essential.  Take almost any technology away from its infrastructure and it will fail.  A car transported back in time 500 years would have few, if any, suitable roads, no source of gasoline, no source of replacement parts, and nobody would know how to operate or repair it.  This has been a stumbling block in developing and deploying hydrogen fueled cars:  no infrastructure of refueling stations.  Similarly, a television in the 16th century would be useless.  So technology must be more than individual objects; it must also include systems.

 

 

In addition to tools and devices, we should include systems and methods of organization...Any collection of processes that together make up a new way to magnify our power or facilitate the performance of some task can be understood as a technology.

Al Gore

 

Systems:  The Intangible Levers

A very special stone was discovered in the kingdom of Lydia (now Turkey) in 550 BC.  A naturally occurring mineral containing silicon, this stone, called a touchstone, could reveal the purity of gold.  As a result, a new technology was invented.  The “new technology” was not the stone itself, but the knowledge surrounding its use:

 

·         Rubbing pure gold against a touchstone made a yellow mark.

·         Rubbing gold diluted with silver made a white mark.

·         Rubbing gold diluted with copper made a red mark. 

 

This tool made possible a system of money by extending our ability to ascertain quality.  The government minted gold coins imprinted with its guarantee of value…which could be tested by weighing the coin and rubbing it against a touchstone.  Money—when trusted—extends our ability to trade.  Think of it as an intangible lever.

Can you imagine trading without money?  Suppose you wanted to trade goats for corn, but the person who had the corn you wanted was uninterested in goats?  You would have to find someone who did want goats and would trade something of interest to the corn seller.  Money could buy anything for sale, and, unlike goats, money did not become sick or die on your way to trade.  While you could say that the coin, itself, is the technology that extends our ability to conduct commerce, the coin is just one component of a system—one that includes the government guarantee, touchstones, moneychangers, and knowledge among those who would accept it as payment.

That system became more sophisticated with the check or credit note, invented in 14th century Italy.  It allowed international trade without packing along large amounts of money.  Being at the center of Mediterranean trade routes, Italy harbored the banks that issued credit notes.  Merchants purchased notes that the bank guaranteed could be exchanged for a set amount of a foreign currency in a specific city.  On the dangerous roads and sea routes that the merchants traveled, robbers were interested in goods and money, not written notes.

The system, including knowledge and trust, is much more important with credit notes than with coins, which could presumably be melted down into something of value outside the system.  The credit note and paper currency relied on information.  Who issued it?  How much is it worth?  What are the terms for its redemption?  The U.S. dollar continues to display signatures of government authorities and the assurance that it is good for all debts, public and private.

The system becomes even more sophisticated in credit card technology.  Predicted in the 19th century novel Looking Backward, but made practical in the 1950s, the credit card allowed us to trade without carrying large amounts of cash or finding someone who would trust our check.  Information from the credit card (the number) and the transaction (the amount) is transmitted to a central computer, which keeps track of credit limits, spending patterns, and stolen cards.

“Smart cards” further extend our ability to conduct commerce by carrying all that information on the card itself.  Insert a smart card into an automated teller machine and “load” it with money from your bank account.  What makes a smart card “smart” is an embedded silicon microchip that stores encrypted information about how much money you have transferred from the bank account to the card.  You can then, for example, insert your card into a soda machine, and if there are enough funds on it, you will get your drink.

Money, in the form of encrypted information, is transferred to the soda machine, leaving less on your card and preventing you from spending the same money twice.  Periodically, the information from the drink dispenser goes to a clearinghouse computer (which could be located anywhere on Earth) that credits the owner’s bank account with the amount you spent.  A credit card, by contrast, must immediately contact a central computer every time it is used, which can be slow and, for very small transactions, relatively expensive.  By coincidence, the touchstone (the first tool extending our ability to conduct commerce) and the smart card (the most recent tool to do so) both contain silicon.
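To make the contrast concrete, here is a minimal sketch in Python of the two flows just described.  It is purely illustrative: the class and method names are invented for this example, not drawn from any real payment standard, and a real smart card keeps its balance encrypted on a tamper-resistant chip rather than as a plain number.

```python
# Illustrative sketch only: invented names, no real payment standard.
# A real smart card stores its balance encrypted on a tamper-resistant chip;
# here it is a plain integer so the two flows are easy to compare.

class SmartCard:
    """Stored-value card: the money travels with the card."""
    def __init__(self, balance_cents: int):
        self.balance_cents = balance_cents

    def debit(self, amount_cents: int) -> bool:
        """Offline purchase: no network call at the point of sale."""
        if amount_cents > self.balance_cents:
            return False                     # not enough value left on the card
        self.balance_cents -= amount_cents   # value moves from card to vending machine
        return True


class Issuer:
    """Toy stand-in for the bank's central computer."""
    def __init__(self, limits: dict):
        self.limits = limits

    def authorize(self, card_number: str, amount_cents: int) -> bool:
        limit = self.limits.get(card_number, 0)
        if amount_cents > limit:
            return False
        self.limits[card_number] = limit - amount_cents
        return True


class CreditCard:
    """Credit card: every purchase is approved by the central computer."""
    def __init__(self, number: str, issuer: Issuer):
        self.number = number
        self.issuer = issuer

    def purchase(self, amount_cents: int) -> bool:
        # Online authorization on every transaction -- slower, and relatively
        # expensive for very small purchases such as a soda.
        return self.issuer.authorize(self.number, amount_cents)


# A soda costs $1.25.
card = SmartCard(balance_cents=500)          # "loaded" at an ATM with $5.00
print(card.debit(125), card.balance_cents)   # True 375 -- approved by the card itself

credit = CreditCard("4111", Issuer({"4111": 10_000}))
print(credit.purchase(125))                  # True -- but only after asking the issuer
```

The point of the sketch is only where the decision is made: the smart card can approve a purchase by itself, while the credit card must ask a central computer every time.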

Money, checks, credit cards, and smart cards are all systems that extend our abilities.  Beyond monetary systems are economic, legal, and business systems that also extend our abilities.  Public and private organizations have countless linkages determining how they interact and cooperate.  Look at a part of these systems in isolation and it will not make sense because its environment defines its behavior.  And that environment is the system.  We cannot understand technology without understanding its context.

Let’s come full circle to the touchstone.  The touchstone is not an invention—it is a naturally occurring mineral containing silicon.  Yet the system of information surrounding its use (such as how to interpret the results) could be called technology.  So, could information alone be considered technology?  If so, the invisibility of information will make it harder for us to recognize new technologies that are largely or completely composed of it.

This is not a new concept.  As noted at the start of this chapter, the root meaning of technology, from Greek, is the study of a craft or art.  In other words, it is the knowledge someone has of a practice, perhaps making pottery or sailing ships.  Perhaps technology, then, is not just the tool that extends our abilities, but the whole system of tool and information about the tool.

 

 

One cannot really understand [technology]
without an understanding of the
roles, incentives, skill, and behaviors
that define its use.

– L.G. Tornatzky

Information:  The invisible ingredient

Take away the information that surrounds the physical artifacts we call technology, and they don’t work.  That information specifies how to operate, manufacture, and repair, and ranges from ancient human techniques to modern computer code.  You cannot always see those instructions, but knowing what to do is a critical component of technology.

Before Homer committed the story of the Iliad to writing in the 9th or 8th century BC, it was a song that included technological information, such as techniques for launching and landing ships.  Recording information took a leap forward with writing, about 5000 years ago, and then again with the interchangeable type printing press almost 600 years ago.

In the 20th century, computers took on the role of manipulating and transmitting information.  In fact, in a circular manner, they record information about the design of their successor computers.  Engineers continue to design the next generation of technology by using the current one.  At the end of the 20th century information sharing accelerated with the Internet.

Some technology is more information than material.  For example, the cost to make one microprocessor is almost as much as the cost to make a thousand.  The material cost of the silicon in a single microprocessor is nearly zero.  The expense comes from manufacturing facilities, manufacturing setup, and research and development of the design.  So the essence of a microprocessor is in how its few square centimeters of silicon are arranged into millions of tiny transistors.  A lump of silicon is almost worthless, but a microprocessor sells for hundreds of dollars.  And the information that separates the two is worth billions (one microprocessor manufacturer, Intel, spends that amount each year on research and development).

In some technology, the material surrounding the information is nearly irrelevant.  Microsoft earns billions of dollars selling compact discs and lots of empty space in cardboard boxes.  The value of their software technology, which can sell for hundreds of dollars, is in the information represented on the discs, not in the material of the discs, which is worth pennies.  And even those pennies can be eliminated from the mix.  Many companies allow purchase of their software by downloading it over the Internet.  The buyer provides information (credit card authorization) and the company provides information (software instructions for the buyer’s computer).  No physical substance moves from the seller to the buyer.  The technology is 100% information.  And that invisible information we call “computer software” generates many billions of dollars in corporate revenues each year.

The fact that the technology can be 100% information also makes it easier to steal.  Stealing 1000 cars is much harder than stealing one.  However, making 1000 copies of pirated software is not much harder than making one.  So, while the creators of information technology enjoy the economy of distributing their information, they also suffer from it.

This pattern of technology as information has a dark side, too.  Weapons of mass destruction are sometimes classified as nuclear, chemical, or biological.  Technologies in these categories have material and informational components.  Those trying to limit the proliferation of these weapons have a much easier time controlling the material components because information moves so quickly and easily.

Years ago, publication of plans for an atomic bomb caused widespread concern.  Fortunately, the critical materials are still hard to obtain.  Nuclear technology was a huge advance in power, but if someone wished to use it—for instance, to blow up a city—that person needed refined radioactive materials.  Although the collapse of the Soviet Union left some of its nuclear facilities vulnerable and countries such as North Korea are developing their own nuclear facilities, plutonium and similarly suitable materials are still much less accessible than information.

Chemical weapons use more commonly available materials, such as agricultural fertilizer.  Publication or distribution of bomb recipes has made it easier for terrorists to create these.  Although established terrorist networks can readily share this information, now any aspiring terrorist with an Internet connection can also easily obtain it, as we saw with the Oklahoma City bombing in 1995.

Biological weapons may be of greater concern than chemical because they can reproduce on their own.  A bomb explodes once, but a plague can procreate and spread.  Information about how to culture and reproduce disease agents (e.g. smallpox) is generally available.  To avoid the danger, governments attempt to control the material component: strains of disease agents.

However, there is a more dangerous form of information concerning biological weapons technology:  genetic engineering.  In the near future, a disgruntled university student could take public information about how to modify microorganisms (e.g. viruses) and then use what will be common laboratory equipment to create a plague for which we have no protection or cure.

Information is a large and growing component of technology.  It moves easily in books, on computer discs, and over the Internet.  When it is part of a technology we consider “good,” that speed benefits us tremendously.  When it is part of a technology that threatens us, that speed undermines our control.

The trend appears to be toward information being more important than material in future technology.  For example, nanotechnology (a new technology that we describe later) promises the capability of assembling almost any physical object from cheap, microscopic raw materials (e.g. the carbon atoms polluting our air).  Companies could then sell the design for a toaster, bed, car, or almost anything.  This information would be downloaded to a matter compiler, located anywhere, which would assemble the product, virtually out of “thin air.”  Today, that is still science fiction, but unless we become aware of this pattern of technology as information, we could still be hunting around for the tangible in a future that is all about information.

 

 

Excerpt from 1949 Webster’s Dictionary.  Note definition 3.

 

Not applied science

In the 1930s, a scientist at a dinner party used the back of a napkin to calculate whether a bumblebee’s wings were large enough to lift it off the ground.  The preliminary answer was that if the wings were rigid like those of an airplane, then the bumblebee could not fly.  However, the bumblebee tilts and strokes its flexible wings quite unlike an airplane, so the scientist left the party to figure out how to take these complicating factors into account.  In his absence was born the myth that, according to science, the bumblebee cannot fly.

The myth is popular to this day because it is an apparent flaw in one of the most powerful forces of the modern world.  If someone had said that patterns in tealeaves deny the bumblebee’s ability to fly, how many friends would you pass that on to?  For centuries, science has been the world’s leading source of truth, so it should not be surprising that some, including the 1949 edition of Webster’s dictionary as shown in the box above, define technology as the application of science.

We have plenty of evidence of this application of science.  When we ride a bicycle, drive a car, or fly in an airplane, we are relying on engineers who relied on science.  Science predicts how things will work, often more quickly and economically than waiting until they are built.  For example, the Wright Brothers used a wind tunnel to experiment with designs for their airplanes.  And today equations can replace many physical experiments.  But there are two reasons this is a poor definition: (1) scientific understanding often follows the creation of a technology and (2) when science is applied to developing technology, the process changes from science to engineering.

One example of science trailing technology: thousands of human generations chipped at stones to create wonderfully sharp knives before the laws governing fractures of solids were uncovered.  Another example can be found in radios.  The “crystals,” vacuum tubes, and transistors that made the first three generations of radios work were accidental discoveries, not applications of scientific knowledge:

·         Early radios were called “crystal sets” because the radio wave detector was a crystalline nugget of germanium, galena, or silicon.  Getting them to work required probing the crystal with a wire until a signal came through and then keeping the wire pressed against that magic spot.  This allowed electricity to flow in only one direction (rectification), but “crystal set” radios were used for years before the rectifying properties were identified, and they were not understood in a scientific sense until after the transistor was developed.

·         The vacuum tube came from the incandescent light bulb, in which Thomas Edison had noted what he called the “Edison Effect,” but saw little use for it.  Others developed it into a rectifier and amplifier, indispensable components of radio, television, and computers.

·         The transistor came from crystal sets.  Why these minerals rectified electricity was not understood scientifically, but Bell Laboratories thought they could improve on the vacuum tube (which, like their light bulb forebears, consumed lots of energy and easily burned out).  That refinement of germanium and silicon crystals into transistors with precisely controlled amounts of impurities inspired scientific research into semiconductors, which led to integrated circuits and the boom in electronics and computers.

Science did lead the way in the discovery of germanium, if not its use in electronics.  When Dmitri Mendeleev presented his periodic table of the elements in 1871, there was a gap between silicon and tin.  His scientific approach told him that even if nobody had yet discovered it, there must be an element to fill that gap.  Calling the as-yet-undiscovered element “eka-silicon,” Mendeleev accurately predicted its weight and properties well before 1886, when it was discovered in Germany and officially named “germanium.”

A more recent example: in 2001 Bell Labs created transistors so small that each used just a single molecule, so 10 million would fit on the head of a pin.  The director of quantum-science research at Hewlett-Packard, Stan Williams, remarked, “They don’t have a clue how or why this works and I don’t have a clue how or why it works either.”  IBM’s director of physical sciences research, Thomas Theis, agreed:  “It appears to be a very interesting result, but nobody, including the authors of the paper, seems to fully understand what is going on here.”  Sometimes inventing is easier than explaining.

A second problem with defining technology as applied science lies in science being abstract and technology being concrete.  Applied science bridges that gap, but it is only the bridge.  The engineering process incorporates formulas and laws from science, but goes well beyond them in balancing costs and benefits.  How strong does something need to be?  How long do we have to test it?  What are the costs of designing to far exceed the expected range of use?  These are practical questions that have little to do with science and everything to do with actually making something useful.

We are not done with the Hawaiian bobtail squid.  The light from its luminescent bacteria is reflected by platelets composed of an extraordinary protein named reflectin.  Scientists are studying that protein to figure out how it works, which may help engineers create microscopic optical devices.  So, even if the Hawaiian bobtail squid is not using technology, it may inspire some.  We can appreciate the importance of science in arming our engineers in their quest to create useful things, but we are better off without an applied science definition of technology.

 

 

 

 

_________________________

 

I don’t know who discovered water,
but I’m sure it was not a fish.

— Marshall McLuhan

 

Can you imagine trying to explain “water” to a fish?  You couldn’t point at water because, where fish live, it is everywhere.  You have to stand apart from something to point at it.  In the 21st century, technology is to humans as water is to fish.  Opening the chapter with squid, two kinds of bacteria, sea otter, chimpanzee, crow, and beaver was a trick to get us to stand apart from technology and point at it.

What makes understanding and evaluating technology urgent is its rapid change, pointing to a future in which it will be even more powerful.  Whatever your personal conclusions as to whether these or other animals use technology, it is clear that, so far, only humans have consciously changed it.  Instinctual use allows tools to change only as quickly as instincts.  Even imitative use, as chimpanzees and birds demonstrate, keeps tools relatively static.  It is the dynamic nature of technology that makes it interesting.  Carl Sagan would not have warned us of the “time of perils” had technology been frozen at the stage of stone tools.

But even as we collectively change technology, individually many of us are tricked into the myopia of equating computers and electronic equipment with technology.  Those who can’t see beyond those current specimens are swept along.  While fish have the choice of fighting the current or going with the flow, humans have the further option of guiding its course…if we are aware.  And awareness is what this chapter is about.  It sets the scope for investigations to come about our relationship with technology.

The definition tools that extend our abilities is an important step beyond computers and electronics.  Both stone tools and technologies not yet invented fit this definition.  To prepare our eyes for what is yet to come, we recognize the modern trend of technologies fitting within ever more complex systems.

Perhaps many of the inventions that can stand apart from modern systems were invented long ago.  For modern inventions, survival of the fittest is determined within an environment of systems.  Recognizing technology as systems, the intangible levers, we are more likely to spot future developments.  Other trends suggest that information, the invisible ingredient, is becoming ever more important in and as technology.  Designing, which manipulates information, pays better than manufacturing, which manipulates material.  Nanotechnology could one day automate manufacturing, making it so inexpensive that what we care about is the information in the design of a technology, not its material.

While our criticism of “applied science” might be seen as an exception, our purpose in this chapter is not to arrive at a single, universal, eternal definition of technology.  Rather, it is to provide some thought-provoking answers to the question, to help you come to your own definitions.  Each of the nine chapters in this book has a similar goal.  The question that heads each chapter is nearly timeless, but the answers cannot be—technology changes too quickly.  Picking a single best answer would be no more than an intellectual exercise, so, instead, we offer context as a platform from which to launch.


 

Chapter 2

Why do We Use Technology?

 

 

…a new communications technology…
allowed people to communicate almost instantly
across great distances, in effect shrinking
the world faster and further than ever before.
A worldwide communications network…
it revolutionized business practice, gave rise
to new forms of crime, and inundated its users
with a deluge of information.

Tom Standage

 

The telegraph was unlike anything that had come before.  Suddenly news could travel as dots and dashes of Morse code through a cable in the Atlantic Ocean between Europe and America.  Letters bobbing for weeks on steamships could be replaced by speed-of-light conversations.  Harnessed lightning replaced paper, changing business and crime.  With improved communications, some predicted the end of misunderstandings between countries and the end of war.  As the 19th century came to a close, the unique technology of the telegraph spread its cables like a giant octopus covering the world.

But the telegraph was not unlike anything that had come before.  Other technologies had earlier improved and even transformed communication.  And others, such as the Internet, would follow.  As mind-boggling as it was to move from handwritten letters to invisible pulses of electricity, using technology to communicate was familiar.  Writing had transformed communication, as had papyrus, cotton paper, wood paper, printing, and printing with interchangeable type.  The telephone replaced dots and dashes with voice (and seemed so fantastic that telegraph companies rejected the idea).  Radio replaced wires, and satellites extended radio’s range to circle the globe.  The Internet added data in the form of text, graphics, and video to the voices we could already send.  Cellular telephones made sure we could connect nearly anytime and anywhere.

What is yet to come?  Technologies as baffling to us as the telegraph was to those living in the 19th century, and used for many of the same reasons we have always used technology.  Finding something relatively constant in the torrential flow of technology is valuable in this period of rapid technological change.  We picked a dozen categories for why we use technology:

 


1.       Food

2.       Shelter

3.       Communication

4.       Transportation

5.       Commerce

6.       Art

7.       Religion

8.       Health

9.       Entertainment

10.    Organization

11.    Conflict

12.    Exploration


 

In this chapter we give examples for four of these categories: Communication, Health, Entertainment, and Organization.  You may think of reasons to use technology that do not fall into one of these categories.  Or, you may find them too specific, and be tempted to generalize them into five or six, similar to the taxonomy of life.

But more important than the specific categories is the benefit of having some categories.  Basic human needs and desires change little, and we can expect that future technology will simply find ever more creative ways to satisfy them.  These categories—or whatever set you adopt—can be a template to place on unfamiliar technologies.

While this may temporarily blind us to a truly new purpose, handled carefully, it will help us past the marketing hype of new technologies.  In most cases, that “completely new, does everything, unlike anything that has ever existed” innovation will satisfy one or several of these common reasons for using technology.  Determining which needs it satisfies will help us find other familiar patterns (e.g. the printing press made it easy to print trashy novels; the web made it easy to publish trashy websites).

In the following sections, we illustrate each category with one or several technologies.  Finding examples was easy—every newspaper or magazine article that mentions a technology includes some implied or explicit reasons for its use.  Deciding which to include was not.  A comprehensive list of all technologies used for a given purpose would be endless—we would have to include every technology in existence.  And a ranked list showing only the most important technology in each category could be predictable and even dull.

So instead, we looked for the most entertaining illustrations for each category.  Do not be disappointed if something as important as the printing press has been pushed out of the limelight by the “high-tech” cigarette, or if we spend more time on Entertainment than Communication.  Neither is a claim of relative importance, but simply an acknowledgement that, elsewhere, the likes of the printing press have received “plenty of ink.”

 


 


For thousands of years, kings, queens, and generals
have relied on efficient communication in order to
govern their countries and command their armies…
It was the threat of enemy interception that
motivated the development of codes and ciphers.

Simon Singh

Communication

On a winter night in 1985, an Iraqi shepherd felt warmth coming from the hill he was sitting on.  The surrounding slopes where his sheep rested were cold, so he was very curious.  Digging into the earth on that remote spot 300 miles west of Baghdad, he found a warm metal tip connected to a machine.  It was tapped into Iraq's main telephone trunk line with Jordan.  A nuclear cell powered it to transmit everything it heard to listeners unknown.  Demolitions experts tried to open it, but it exploded, killing two.

In Saddam’s Bombmaker, Khidhir Hamza reported that, “According to interviews the security people conducted with other shepherds and Bedouins in the area, helicopters with Iraqi markings had unloaded soldiers on the hill a few months earlier.  They’d seen the soldiers digging on the hill, and even heard them talking in Iraqi slang.”  But those soldiers were not Iraqis.  Few neighboring countries trusted Saddam Hussein, but Iraq was sure that Israel, alone, had the capability for this elaborate telephone-tapping operation.

Eavesdropping probably predates writing, but we have historical evidence for the use of secret writing shortly after the development of writing itself.  In ancient Egypt, priests used Hieratic (“sacred writing”) to keep communications secret.  Cryptography, the science of encoding and decoding information, has made use of many technologies, and it has spurred the development of some.

Hidden beneath the rough, dark waters of the Atlantic, German U-boats searched for Allied ships to sink.  World War II German naval commanders were so confident of the imperviousness of their Enigma encryption machine that they regularly radioed orders to their subs at sea.  But, within half a day, Britain could figure out where the subs were heading.  How?  British code breakers used Colossus, the first electronic computer (though some call the machine, built from 1500 vacuum tubes, a calculator rather than a general-purpose computer).

This is how it worked.  Enigma machines used a typewriter keyboard and electrical connections routed through several wheels, or rotors, each with 26 positions, which scrambled the letters that were typed.  On the receiving end, another Enigma machine with identically wired rotors unscrambled them.  When Germany suspected that the codes from its three-rotor machines had been compromised, it added a fourth rotor.

Even with information about the Enigma machines captured by Polish and French resistance fighters, England could not take a brute force approach to figuring out how the Germans wired up the rotors each month.  Even if they could test 200,000 states each second, it would have taken more than 15 billion years, roughly the age of the Universe!
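A rough check of that estimate (our sketch, not the author's): the settings count below is an assumed, commonly cited figure for the full Enigma keyspace once the plugboard is included, and the testing rate is the one used in the text.

```python
# Rough check of the brute-force estimate above.  ENIGMA_SETTINGS is an
# assumed, commonly cited count of Enigma configurations (plugboard
# included); it is not a figure taken from this book.
ENIGMA_SETTINGS = 1.07e23
TESTS_PER_SECOND = 200_000
SECONDS_PER_YEAR = 365 * 24 * 3600

years_needed = ENIGMA_SETTINGS / TESTS_PER_SECOND / SECONDS_PER_YEAR
print(f"{years_needed:.1e} years")   # about 1.7e10 -- more than 15 billion years
```

At that rate an exhaustive search of the monthly rotor wiring was hopeless, which is why the patterns and repeated messages described next mattered so much.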

To overcome this challenge, England needed three things:  its Colossus computers, human ingenuity, and human fallibility.  The ingenuity was analyzing the encrypted messages for patterns in the German language, which could show through the encryption—even if so faintly that only a computer could detect it.  The fallibility was the German practice of announcing each victory to every far-flung military unit in precisely the same language.  This gave England multiple copies of a message, each encrypted differently by the same wiring of the rotors.

In the 21st century, our reliance on telecommunications is even greater and encryption has become a political issue.  The National Security Agency in the U.S. works with Britain, Canada, Australia, and New Zealand to monitor and analyze global communication.  Telephone, fax, and computer messages are intercepted by “Echelon” computers, which look for the signature of a terrorist plan or other security threat.  Humans review the most interesting material once it has been filtered down from an immense number of intercepts.

Some European nations complain that this monitoring picks up business information, which is then shared with U.S. companies, giving them an unfair competitive advantage.  As with any arms race, encryption has been improved to foil the Echelon monitoring, but U.S. law prevents export of any encryption system beyond a certain ability (presumably the level beyond which government computers cannot decrypt messages).

Some communications are easier to monitor than others.  The increasing use of cellular and satellite telephones is broadcasting more conversations into the atmosphere, but, as the Iraqis discovered, even “land lines” are not secure. 

While the Internet and Web are capable of much more sophisticated applications, two of the most popular have been email and instant messaging—simple, quick, inexpensive communication.  Future technology, however strange it may appear, may also satisfy this enduring human need to communicate.

 


 


I felt as comfortable operating on my patient
as if I had been in the room.

Jacques Marescaux, MD

 

Health

How could a surgeon operate in a room distant from the patient?  Cameras transmit views of the patient to the surgeon and remote controls allow the surgeon to operate robotic manipulators.  This technology was developed for surgeons in the same room as their patients because it can be slipped through tiny incisions, which are much less traumatic for the patient than holes big enough for the surgeon's hands.  Another advantage is that relatively large finger motions can be translated to minuscule knife or probe motions, giving the surgeon much steadier and more precise hands.

Why would a surgeon operate in a room distant from the patient?  The surgery you need may have been studied by a local surgeon, but actually performed hundreds of times (successfully!) by a surgeon in another part of the world.  The odds are better with the veteran…if the technology gives the surgeon a good enough feel.

The key to remote surgery is dividing the process into stages, some of which involve only information, such as steps two and four below:

 

1. The patient is viewed by digital cameras

2. Information from them is transmitted to a computer screen

3. The surgeon views the screen and manipulates computer controls

4. Those controls transmit information to robotic "hands"

5. The robotic hands interact with the patient

 

Our global communication network is good at transmitting information anywhere.  As long as the cameras and robotic manipulators are in the same room as the patient, and the viewing screens and controls are in the same room as the surgeon (and the system does not crash), the distance does not much matter.
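A toy sketch of the five stages above (our illustration, not the author's) makes the point explicit: only the purely informational stages travel over the network, and those are indifferent to distance.

```python
# Illustrative only: the five stages of remote surgery described above,
# flagged by whether they are pure information transfer.
stages = [
    ("cameras view the patient",              False),  # physical, at the patient
    ("images transmitted to the screen",      True),   # information only
    ("surgeon views screen, moves controls",  False),  # physical, at the surgeon
    ("controls transmitted to robotic hands", True),   # information only
    ("robotic hands operate on the patient",  False),  # physical, at the patient
]

for description, information_only in stages:
    location = "anywhere on the network" if information_only else "on site"
    print(f"{description}: {location}")
```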

Computers can further change the motion of manipulators by incorporating the typical movements of recognized experts in each surgical area.  These expert systems imitate the best practices, allowing them to be used even when the experts are not present.  Eventually, this may go beyond minor modification of surgical movements, with computers performing surgery on their own.  An attending surgeon would switch on an "autopilot," much as airline pilots commonly do today.

Still very expensive, computer-assisted surgery is not yet bringing the best of surgery to poor areas that lack any form of it at all.  Do benefits once reserved for the few ever trickle down to the many?  Well, in 1836, one of the richest people on earth died from something that, today, any pharmacy with antibiotics could cure.

At the time, germs—the invisible creatures that we so carefully sterilize from open wounds and surgical instruments with heat, alcohol, and high-tech substances today—had not yet been discovered.  So Nathan Rothschild, an otherwise healthy 59-year-old banker, died of a simple infection from an abscess or boil—or from the surgeon's attempt to open it with a non-sterile knife.

Medical technology was primitive by current standards, and did not include the antibiotics that could have saved him.  And current antibiotics are primitive compared to the eventual products of biotechnology.

Rothschild could not have imagined the reach of current medical technology.  Cochlear implants are electronic devices placed in the inner ear, or cochlea, that bring hearing to the deaf.  They convert sound waves into electrical impulses to stimulate nerve endings.  For those with nerve damage in the cochlea itself, newer implants connect directly to the brain stem.

To test this technology, a cat in California has a brainstem implant for hearing and a person has a brainstem implant to control a cursor on a computer screen.  That person suffered from a brainstem stroke and lost use of his hands, but he can still interact with a computer, which picks up his thoughts on wires that pass through his skull to the implant.

Advances borrowed from other areas—for example the sophisticated audio analysis and signal processing done by spies at the KGB, CIA, and NSA—could make future cochlear implants superhuman.  Expect that the future will bring more and more technology to satisfy our quest for health because the consumers with the most disposable income have many of their other needs, such as food and shelter, already met.

 


 


The constants all through the centuries
will be the same: wine, women, and song.
Other than that, life will be very different technologically.

Phyllis Diller

 

Entertainment

Smoking tobacco, which we categorize as a form of entertainment (albeit an addictive and dangerous form), has its share of technology.  The cigarette rolling machine, invented in 1881, helped cigarettes eclipse pipes and chewing as the most popular form of tobacco.  But as important as that mechanical technology was, chemical technology is the key to cigarettes' insidious power.

Here is how it works.  Two types of tobacco go into most cigarettes:  reconstituted and puffed.  Puffed tobacco is made from tobacco leaves saturated with freon and ammonia before they are freeze-dried, which doubles their volume.  Reconstituted tobacco is made from tobacco stems and parts of the leaf that cannot be used in puffed tobacco.  These are pulped and then sprayed with hundreds of chemicals including nicotine, which is also found naturally in the leaf.

Nicotine is the most important chemical in cigarettes.  Highly addictive, it diminishes appetite, affects mood, and can, at least temporarily, improve performance.  Tobacco company laboratories developed chemical additives to improve delivery of nicotine.  Ammonia, for instance, makes more of the nicotine vaporize when heated by the burning of the cigarette.  Vapor can readily travel to the lungs, where the nicotine accompanies oxygen into the blood, which flows to the heart (which can speed up by 10 to 20 beats per minute with the first nicotine "hit" of the day), which pumps it to the brain.

In the brain, nicotine affects neurons, the nerve cells behind our thoughts and feelings.  Specifically, nicotine mimics chemicals that neurons use to communicate with each other, over-stimulating the neurons with many false signals.  By interfering with normal neuron communication, nicotine can alter mood, often pleasurably.

Since the brain, like most life, is highly adaptable, it adapts to the over-stimulation of neurons.  As a result, when a smoker stops smoking, the brain initially perceives the normal level of stimulation that resumes as inadequate.  The unpleasant symptoms the former smoker experiences are called "withdrawal," and they can be alleviated by resumption of smoking.

A Spanish historian noted the addictive nature of cigars in 1527.  When science caught up with conventional wisdom and declared smoking dangerous to one’s health, the highly profitable industry started working on a “safer” cigarette.

The first danger their scientists took on was tar, one of the many chemicals that smoking introduces into the lungs.  They created filters, air holes that dilute the smoke with fresher air, and low-tar blends of tobacco to reduce the amount of tar going into smokers’ lungs.

But they also reduced the nicotine, which smokers’ brains were finely attuned to.  Just as your brain can adjust your throw when the ball falls short of the basket, smokers’ brains adjusted the puffing when the nicotine fell short of the “norm.”  By inhaling deeper, covering the air holes with fingers or lips, or smoking more cigarettes, smokers were able to get their accustomed nicotine levels.  This also restored the previous levels of tar.

The Accord™ cigarette, introduced in 1998, is a "high-tech" approach to safer cigarettes.  The smoker inserts one end of a special cigarette into the microchip-based heating unit.  Because the tobacco is not burned away into ash, the unit has a liquid crystal display (like those found on watches and calculators) to indicate how many puffs remain in the cigarette.  After each pack of cigarettes, the smoker must recharge the batteries in the Accord.

 

The Irony of “Safe” Cigarettes

Heating tobacco, as the Accord does, rather than burning it, as conventional cigarettes do, produces no carbon monoxide or secondhand smoke.  Carbon monoxide is also found at the tailpipes of cars, and can be fatal when a car is run in a closed area, such as a garage.  Secondhand smoke has led to many state laws prohibiting smoking in public buildings and even at outside areas like theater lines or building entrances, where smoke might be drawn inside.  And yet, the manufacturer makes no health claims about the Accord.  There is a good reason for this.

The technological problems with "safe" cigarettes are dwarfed by the political problems, a pattern we will see with other technology.  Tobacco companies worry that developing and selling "safer" cigarettes would be viewed by courts as an admission that other cigarettes are not safe.  The lawsuits have stakes in the billions of dollars.

Further, tobacco companies are concerned about regulation by the U.S. Food and Drug Administration (FDA).  Current tobacco products are exempt from FDA scrutiny due to a “grandfather clause” under which a new law does not affect someone (or something) that preexisted the law.  But a new class of “safe” cigarettes might not fall under that clause.  The irony is that the safest course for tobacco companies—if not for their customers—seems to be to avoid “safe” cigarettes.

 

In the movie Sleeper, Woody Allen plays someone cryogenically frozen and then thawed in the future.  There, a favorite form of entertainment is touching a metal ball that makes you feel good.  Humans already spend lots of resources on feeling good, and any technology that effectively and efficiently does that will be in demand.

 


…social groupings larger than 150-200
become increasingly hierarchical in structure…
There must be chiefs to direct, and a police force
to ensure that social rules are adhered to.

— Robin Dunbar

Organization

Why would we need technology in order to organize?  An answer comes, circuitously, through a story about the brain’s neocortex.  And we start with chimpanzees.

Take three chimps in the same band.  Each chimp is aware of his relationship with the other two and of the relationship between the other two.  Who is dominant, who has done favors for whom, who can be trusted to repay favors, and who cannot?  Chimp decision-making has been observed in the wild.  Two chimps may team up to attack another chimp to steal food…unless the victim is near others who may come to his aid.  Any chimp for whom the victim has performed a recent favor, such as grooming, is suspected of being a supporter.

Social animals keep track of their relationships with other group members and they also keep track of the relationships between them.  In highly social groups, it is a matter of survival to know how others will interact.

As groups become larger, there are more relationships to track.  With two individuals there is just one relationship.  With three individuals there are three relationships (shown as arrows below) and with four there are six:

[Diagram: groups of two, three, and four individuals, with their one, three, and six relationships shown as arrows]


 


Metcalfe’s Law

Chimpanzees in a tribe form a network.  In a network, the number of possible one-to-one relationships is proportional to the square of the number of individuals.  Doubling the tribe quadruples the number of possible relationships.  Multiply the tribe size by three and the possible relationships multiply by nine.

People, telephones, computers, and railroads form networks, too.  Robert Metcalfe suggested that the value of a network is proportional to the number of possible relationships, which is the square of the number of nodes.  Those who accept this “law” as true are equating value with possible connections.

Network technologies such as telephones, fax machines, pagers, cellular phones, and email accounts have grown slowly at first, but accelerated suddenly once they reached some critical mass.  The first person to own a telephone could do little with it, but today, having a phone is indispensable because it can make so many connections.  When just a few university scientists used email, it had little value to most people.  Today, many rely on email for both work and play.

 

A primate study has shown that the size of the brain's neocortex correlates with the size of the social groups in which the animal lives.  Based on the pattern found in non-human primates, the human neocortex suggests a maximum group size of 147.8, or about 150.  In a group of 150 individuals, there can be 11,175 such relationships, too many to memorize as a list, but not unreasonable to learn in context.  A soap opera aficionado has no trouble remembering which of dozens of characters hate each other, for instance.
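The 11,175 figure is simply the number of distinct pairs in a group of 150.  A minimal sketch of the arithmetic (ours, not the book's):

```python
def pair_relationships(n: int) -> int:
    """One-to-one relationships (distinct pairs) in a group of n individuals."""
    return n * (n - 1) // 2

print(pair_relationships(2), pair_relationships(3), pair_relationships(4))  # 1 3 6
print(pair_relationships(150))                                              # 11175
```

Because n(n-1)/2 grows roughly as the square of n, this is also the counting behind Metcalfe's Law: doubling the group quadruples the possible connections.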

While that number of 150 still guides the size of clans, military "companies," fraternities, and church congregations, technology has steamrolled over it with cities.  Technology concentrated people with agriculture and then united dispersed populations with various communication technologies.  How do we cope with more relationships than we can remember?

We define relationships with technology.  Stoplights and traffic systems define relationships with other drivers.  Uniforms define relationships with police, fire, and medical personnel.  A judge’s black robes are symbolic of the system defining our legal relationships.  Symbolic language, writing, and computers help us manipulate, store, and transmit symbols of our relationships.

Still, there is stress from living and working with so many people.  How can technology help our relatively unchanging biology adapt to our increasingly complex surroundings?  Imagine knowing everyone you see.  Unless you live in a small village, you would probably have to be augmented by technology.  Technology will eventually be able to identify everyone you come in contact with, providing you information about whom they are related to, what their interests are, what they do for a living, and what their friends or former friends say about them.  This information could be displayed on your glasses or contact lenses so that walking down a busy sidewalk, there would be no strangers.

Do we want village life with 10 billion people?  Whether we want it or not, technology is enabling this level of familiarity.  Already, glasses exist that project information on your field of view, like the heads-up display used by fighter pilots, helicopter gunship pilots, and the drivers of some cars.  Attached cameras, wireless links to remote computers, and image recognition software exist and their commercialization is not far off.  Once the glasses are reduced to contact lenses our remarkable familiarity with each other will appear quite natural.

To complete this utopia—or nightmare—we need only add “Big Brother” databases to store and cross-reference all this information.  In 2002, the U.S. Information Awareness Office was formed to do just that in order to fight the war against terrorism.

As with each category we have touched on, it has been only a touch.  Many other technologies satisfy our desire and need to organize.  The web, for instance, organizes people through chat rooms and virtual communities.  Terrorists are known to have used the web to organize their efforts.  Humans are social animals so it seems inevitable that we would use technology to organize.

 

_________________________

 

 

Why do we use technology today?  For many of the same reasons we have always used it.  Perhaps for the same reasons we always will use it.  In our quest to understand and evaluate technology, “why do we use it?” is a powerful question because it so quickly categorizes even those things we do not yet understand. 

Just as science has categorized life into taxonomy, we may categorize the reasons we use technology.  The diagram below shows a standard classification (or taxonomy) of living things into five kingdoms, and then traces humans through the increasingly specific levels of phylum, class, order, family, genus, and species.

 

 

[Diagram: the classification of living things—kingdom, phylum, class, order, family, genus, species—with a parallel classification of the reasons we use technology]
 


The diagram also shows how we might start to classify the reasons we use technology.  We are not saying that technology fits into the classification of living things, just that a similar approach may help us understand why we use it.

The classification of living things has changed over time.  It started with just two kingdoms:  animal and plant.  Fungi earned a third category and then science discovered life worthy of additional categories.  Some scientists suggest a 6th kingdom (archaea, an ancient form of life that evolved separately from bacteria and blue-green algae), showing that classification does not lead to a single, obvious, and universally accepted answer.

As useful as our map may be, there is a danger we might mistake it for reality, attempting to force any new technology into one or several of our categories.  If it did not fit, we might ignore it, believing that anything outside a category is unimportant.  Or we might miss an additional use for a technology simply because it satisfies some other need so well (e.g. a hammer as art).

Still, just because a tool can be misused, does not mean that it cannot be properly used.  Our tool is, in a sense, itself a technology, one used for exploration.  And, like all technologies, our categories are a double-edged sword.


 

Chapter 3

Where Does Technology Come From?

 

 

An ape may on occasion use a stick
to beat bananas from a tree;
a man can fashion the stick into a cutting tool
and remove a whole bunch of bananas.
Somewhere in the transition between the two,
the hominid, or the first manlike species, emerges.
By virtue of his nature as a toolmaker,
man is therefore a technologist from the beginning,
and the history of technology encompasses
the whole evolution of man.

Encyclopedia Britannica

 

Imagine a time when technology did not affect our lives, a time before the Internet, microprocessor, atomic bomb, and airplane.  Those technologies take us back to the beginning of the 20th century, but we have to go farther back…before the automobile, telephone, light bulb, electric generator, steam engine, telescope, printing press, and clock.  That takes us back before the 2nd millennium, but we have to go farther back…before the plow, wheel, lever, and stone tools.

That time, when technology did not live with us—changing how we live—predates history.  It is hard to imagine 2.6 million years, so we compress it to just one year, ending today, illustrated on the next page.

Just as January 1 opens our imaginary year, we invent stone tools.  Later that month, around the 29th, we invent the wedge, useful for prying things apart.  These technologies are enough to keep us busy—refining and improving, maybe even losing and rediscovering—until the afternoon of October 22, when we perfect the trick of starting and controlling fire.  Or at least that is when we first leave enough evidence to convince later archeologists of our accomplishment (long before this we made use of naturally occurring fires).  Good job, everyone.  Take the rest of the year off.  December will be busy.

Christmas Eve we invent the bow and arrow to hunt.  Less than a week later, on December 30, we create wind musical instruments, but we save everything else for the next day.  Before dawn on that last day of our imaginary year, we invent the plow and the wheel, so we’re producing food surpluses and soon carting them about.  Those food surpluses allow for specialization of labor, so before lunch we plumb our first bathroom and, at lunch, invent glass.

A minute before quitting time on December 31—4:59 PM—we find our bearings with the magnetic compass and decide there’s more to do.  By 8:30 PM, a mechanical clock tells us how late it’s getting.  By 10:41 it is quite dark, and we view the stars with a telescope.  The steam engine and electric battery appear around 11:15.  With less than half an hour left to go in the year, we shrink the world with telephone, automobile, and airplane.  The power of the atom succumbs to our investigations at 11:48.


 


[Illustration: 2.6 million years of technology compressed into a single calendar year]

The epidemic spread of the integrated circuit and microprocessor starts a few minutes later, leaving us just a couple of minutes to experience the World Wide Web.  Then, in the very last minute we develop the ability to create artificial life, a 300-gene virus, assembled gene-by-gene in a laboratory.  For ethical and moral reasons, we postpone its actual creation, but we do finish decoding our own genome.
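As a rough check of this compression (a sketch of ours, not the author's), every real day shrinks to a small fraction of a second, and recent inventions crowd into the final minutes:

```python
# Map "years ago" onto the imaginary year (2.6 million years -> 365 days).
SPAN_YEARS = 2_600_000
YEAR_SECONDS = 365 * 24 * 3600

def seconds_before_midnight(years_ago: float) -> float:
    """How long before midnight on December 31 an event falls."""
    return years_ago / SPAN_YEARS * YEAR_SECONDS

print(seconds_before_midnight(400) / 60)   # ~80 minutes: the telescope, around 10:40 PM
print(seconds_before_midnight(1 / 365))    # ~0.03 seconds: a single real day
```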

Oh yes, in the final fraction of a second (each thirtieth of a second corresponding to about one of our real days), you start reading this book, which answers questions including, "Where does technology come from?"  And to this question, we offer five answers.  Technology comes from:

1. Other technology

2. Dense populations

3. Specialization

4. Plan or Accident

5. Protection.

 

Technology comes from other technology.  In a sense, stone tools made possible all technology that would follow.  While we cannot make a microprocessor with stone tools, we could not make one now if stone tools had not started the process.  Stone tools had to come before the axes and spears, sewing needles, rope, pottery, fishing nets, and baskets.  With stone tools, we were able to make better stone tools and also sharpen or carve wooden tools.  With those, we made still better tools, slowly at first, but accelerating to and through the present day.

Even the idea of creating technology must have been easier to conceive once we started using stone tools.  Of course the same is true for many technologies that extend our physical abilities to create and spark our thoughts of something similar but better.

Dense populations accelerated the pace of innovation near the end of our calendar.  Early on, with tool-using humans grouped in small, dispersed bands, one invention had little chance of being seen by another inventor who might improve upon it.  If one person invents the hand ax and another person is skilled at lashing things together, their combination could result in a conventional ax with handle (giving it the lever advantage we enjoy today).  But, if those two people are isolated, neither benefits from the discoveries of others.  No synergy.  Progress is made only by repeated advances within each community.

Specialization further accelerated innovation.  Coming just before dawn on the last day of our calendar, the plow created a surplus of food, allowing us to specialize.  When everyone needs to hunt and gather in order to eat, then crafting pots, knives, and other tools can only be done in spare time.  But agriculture created a surplus, allowing some to exchange part of that surplus for technology made by specialists.  Those specialists had time to create all sorts of new technologies, including those making agriculture more efficient.  This creates yet more surplus and opportunity to develop even better technology.  Feeding on itself, this process has led to the point that farmers, once representing nearly 100% of every society, now represent just a tiny percentage of the developed world—2.5% of the U.S. population.

Both plan and accident have always fostered innovation.  We have stumbled across inventions (probably the first stone tools) and pursued them (the atomic bomb during World War II).  It is, of course, hardly profound to say that technology is either intentional or it is not.  More interesting are the characteristics of the environments that contribute to one rather than the other.  Some environments focus resources on planned development.  For example, war led to both the atomic bomb and the modern pencil “lead.”  Other environments allow a diversity of efforts, which can result in more accidental discoveries.  The laser is an example.  Developed in a peacetime free market, it was invented to create very high frequency radio waves.  No one anticipated using it to read music from compact discs or reshape corneas for better sight. 

If you spend resources on inventing a new technology, you might prefer that someone else not steal your idea.  If the threat of losing out on the benefits you feel you deserve is strong enough, you might not go to the trouble of inventing in the first place.  Protection through patents and similar government intervention is intended to encourage innovation by assigning rights to the information behind inventions.

Two factors now make the protection of information increasingly important to the creation of technology.  First, information travels ever more quickly around our world and, second, ever more technology is composed mostly of information.  The value of both computer software and engineered drugs is less their raw materials (e.g. a compact disc or pill) than their design (the development of which constitutes most of the technology’s cost).  Awareness of this pattern is important in unraveling the debates about unauthorized music sharing and about the costs of AIDS drugs in the developing world.

Understanding where technology comes from is part of our quest to understand and evaluate technology.  This chapter's question is one of the four blocks in the foundation of Identity (along with what it is, why we use it, and how it works).  Part of being able to understand and evaluate technology is the ability to encourage innovation in our organizations or communities.  The patterns we discover in where technology comes from are not passing fads.  In fact, the first one has been in play for millions of years.

 

 

Thirty thousand years ago,
chipping flint was the
high technology of the day.

Eric Drexler

Other Technology

Apes and birds use sticks as tools.  Chimpanzees throw stones and use them to crack open nuts.  But there is a crucial difference between the stone tools of our ancestors and sticks or simple stones.  Stone tools have intentional shape, usually a sharp edge.  With sharp edges, stone tools can sharpen sticks into spears, slice animal hides into clothing, or cut animal sinew with which to sew those hides together.  Stone tools can create other tools, which can create still more tools, leading to a cascade of improvements.

Few stones lying around have sharp edges useful for cutting.  We can create such an edge, though, by striking stones together.  A chimpanzee can do this if trained, as scientists demonstrated by presenting Kanzi, a talented Bonobo chimpanzee in captivity, with a food treat enclosed in a box tied with rope.  They demonstrated how to hit two stones together to flake off pieces, some of which were sharp enough to cut the rope.  Kanzi learned this and even tried his own technique of throwing one stone against the other.

What Kanzi did not learn is something that we did two million years ago.  Randomly smashing rocks together, which also happens in nature when they fall from cliffs or rivers pound them together, does not produce good cutting edges.  To get those, you must study a stone to figure out the best angle of striking.  Then, with each strike breaking off flakes, you must adjust your angle.

Few humans have that skill today, but we would have the cognitive ability to develop it if we needed it.  Over months of training, Kanzi—who can do something as sophisticated as tying his shoelaces—did not develop it.  What he created with human coaching was like the very first stone tools we have found.  The first three steps in our development of stone tools were:

·  Stone chips (3 to 2 million years ago).  These often look similar to rocks broken by natural causes (e.g. crashed into each other by a rushing river).

·  Oldowan stone (2 to 1.5 million years ago).  They are clearly intentional because the flakes and the core pebble from which they came are found together.

·  Handaxes and Levallois flakes (1.4 million to 250,000 years ago).  Easily recognizable as tools, handaxes—what we would call an "ax head"—would later evolve into a more familiar ax when strapped to a wooden handle.  Levallois flakes showed forethought because the technique involved careful preparation of the core of a rock in order to get cutting flakes of a predictable size.

Building on itself, the tool-making process blossomed to transform our world, but it is hard for us to imagine how slowly this transformation began.  In the 21st century, some technology changes every few years.  If a computer comes with half the memory you want, just wait two years for the next model.  It will have double the memory (and processor speed and disk size) at the same price.  Or, get the old model at near half the price.

We are becoming accustomed to technology flowing like a swift river, with improvements arriving constantly.  Waiting thousands of years for a small improvement in technology would seem, to us, forever.

Before stone tools existed, they must have been very difficult to imagine.  Further, they were not trivial to make.  A trained chimp, capable of much else, apparently lacks the hand coordination and mental forethought to chip stones into the specialized tools made by our ancestors.  But once the tool-making process started, it built on itself.  Tools made it easier, both physically and mentally, to create better tools.

Physically, the tools gave us capability, such as sharpening sticks, which our bare hands did not provide.  Mentally, the tool showed us by analogy what might be created.  The more examples of tools we saw, the more likely we might think up another.  Today, invention often comes of imagining something “a little like this thing and a little like that.”  Seeing that music or other audio can be recorded on cassette tapes and CDs suggests that video should not be restricted to VHS tapes.  Hence, the DVD (which looks just like a CD).

 

Seeding the Crystal

How could developing stone tools have been so slow?  Were humans that much less intelligent back then?  In answer to that question, consider the island of Tasmania and an old TV show, MacGyver.  As anywhere, Tasmania suffered from occasional famine.  Unusual, however, was that for about 4000 years Tasmanians, surrounded by rich oceans, did not fish.  They did not think of fish as “food,” much as most Americans and Europeans rarely think of insects as food, even though people in many parts of the world recognize them as highly nutritious.

Now, on to MacGyver, the television show about a resourceful hero who uses a paper clip to short out a nuclear missile, a chocolate bar to plug an acid leak, and a cold capsule to trigger a homemade bomb.  What did he have that most people with easy access to paper clips, chocolate bars, and cold capsules lack—other than life-threatening situations every week?  Information.  Most of us lack information about missiles, acid, and bombs just as the Tasmanians lacked information about fish.  Similarly, before we invented stone tools, we lacked information about stone tools—a seed for technology.

Our vast interrelated network of technology is like a crystal.  The molecular components of crystals can float around in liquid (non-crystalline) form until they come in contact with a “seed” crystal.  This seed is literally a few molecules that have already been stacked into a crystal structure.  These cause more molecules to come out of solution, adding themselves to the structure.  Integrated circuits are made from a giant silicon crystal grown from a tiny silicon seed.  Stone tools may have been the conceptual seed from which all technology since has grown.

 

This mental and physical effect is in play today.  Computers extend our brain as stone tools extended our brawn, so we may be able to apply our insights from ancient history to the present.  Physically, new computers allow us to design tens of millions of transistors onto a chip of silicon the size of a postage stamp.  These chips will power the next generation of computers, allowing design of even more complex circuits.

Mentally, computers have become our starting point for wondering, "what's next?"  Computers model a whole new form of tool, one that operates on information, follows rules, and even appears to think.  When the computer Deep Blue beat world chess champion Garry Kasparov in 1997, it was applying rules, millions each second.  Manipulating huge amounts of information is becoming commonplace, and people learning about their environment today will assume that capability as a basic building block toward new technology.

If advances in technology came about, in part, because stone tools were seen as basic building blocks, then we may, by analogy, expect fantastic technologies in the future from those who now view computers as basic building blocks.  It is hard to imagine what will be created when the most fantastic technology we have today is taken as a given, just a starting line.

While it is hard for us to imagine being ignorant of basic tools, it is easy to imagine being ignorant of missiles, acids, and bombs.  How long would it take us to figure them out if we were left on our own?

The key to the next source of technology is that we are not left on our own.  We share ideas and build on each other’s ideas.  The more contact we have with each other, the more likely one person’s idea will trigger someone else’s improvement.  One effect of stone tools—which helped us survive by helping us hunt and by protecting us from predators and the elements—was population growth.  And that led to denser populations.

 

 

More and denser population
means more advanced technology.

Robert Wright

Dense populations

One good technology leads to another.  Even Thomas Edison, one of the most prolific inventors of all time, built on the inventions of others.  The first incandescent light bulb was invented in 1802; Edison was born in 1847; he invented the first practical incandescent light bulb in 1879.  His bulbs did not quickly burn out for two primary reasons.  First, he tested variations on the earlier filaments, finding some that resisted burning.  And, second, he used the latest and best vacuum pumps to evacuate the air that would have allowed the filaments to burn.

But what if Edison had been isolated from the many earlier bulbs and from advances in vacuum technology?  What would he have invented?  And even if he had been brilliant enough to invent these precursors and then proceed to his light bulb, what about the technologies on which those precursors were built?  As we just saw, those go all the way back to stone tools.  Fortunately for Edison and those of us who enjoy something brighter than candlelight, he lived in an age when dense populations transmitted knowledge of inventions far and wide.

For a simpler illustration consider the hand-ax, which is a sharpened stone cradled in the hand.  These have been around for about 400,000 years.  Someone—or, actually, many different, isolated people—invented them.  Eventually, others figured out that attaching them to a handle of wood or bone protected your fingers from being smashed and provided a lever for applying greater force.  The axes we buy in a hardware store all have handles.

Did the same person who invented the hand-ax also think to attach a handle?  No, not unless he or she lived for 380,000 years: the technology of hafting, or slotting a handle to cradle a stone ax head, is just 20,000 years old.  Some people invented hand-axes; others invented hafting.  The denser the population, the easier knowledge of one invention could be communicated to the potential inventor of the next.

When populations are sparse, ideas don’t bump up against each other as readily.  When Europeans discovered Australia in the 18th century, they found the native aborigines living as pre-agricultural hunter-gatherers.  But southeast of Australia on the island of Tasmania, native technology was even more primitive, comparable to what Europeans had in the Stone Age more than ten thousand years ago.  While Australians had at least hooks, nets, and the ability to both sew and start a fire, the Tasmanians lacked even these.  Tasmanians also lacked bone tools, something developed 90,000 years earlier in Zaire (as harpoons) and 40,000 years earlier nearly everywhere else.

Why did the indigenous peoples of Australia and Tasmania not develop technology common in other parts of the world?  One answer is that they lacked the dense populations that would have allowed easier sharing of information.  Australia had about 300,000 people and Tasmania about 4000.  With a smaller “communal memory,” it is more difficult to connect two seemingly unrelated facts into a new technology.

It is even possible to forget technology.  Archeologists have found bone tools, needles, and tools for fishing from about 3500 years ago on Tasmania, but none more recently.  In his book Guns, Germs, and Steel, Jared Diamond presents the theory that technology once used on Tasmania was lost.

 

Chain Reaction

An analogy from physics is a nuclear reaction.  In a power plant, the uranium atoms, which constitute the fuel, emit high-speed particles, which strike other uranium atoms, causing them to emit more particles.  This chain reaction is controlled by inserting rods, which simply absorb the high-speed particles and prevent them from triggering further emissions.  Translate high-speed particles to new inventions and rods to geographic isolation.

In Tasmania, you might have been a genius, but if the other 3999 Tasmanians were not clever in just the right way, your invention would progress only as far as you pushed it.  No chain reaction.  Unlike high-speed particles, technologies can have long lives.  As long as enough people value a technology, it can be replicated by succeeding generations, waiting for the next clever inventor.  But if even a single generation loses interest or the ability to fabricate it, it can be lost, just like a high-speed particle shooting off into space.

 

Population grew, in part, because spears, sewn clothing, axes, and other technologies protected us from claws and cold.  In his book Non-Zero: The Logic of Human Destiny, Robert Wright theorizes that population density in Africa and Eurasia began to increase about 40,000 years ago because the growing population ran out of empty, habitable land, and was forced to simply pack in closer.  Around this cusp, the rate of technological change increased from one major innovation every 20,000 years to one every 1400.  Another cusp was reached about 12,000 years ago, about when the harvest sickle and fired clay pottery were invented.  With proto-agriculture and the means to store surplus, the rate increased to one every 200 years.

In California’s Silicon Valley, Massachusetts’ Route 128, and other geographic concentrations of technology companies, we find a similar relationship.  The density of talent allows for ideas to trigger other ideas.  Employees jump from one company to another, cross-pollinating as they go.  Nearby universities provide basic research and even more ideas.  Companies merge and combine technology, paying a premium to be located in these centers of innovation with their density of ideas and talent.

In this section, we care about dense population only because it facilitates communication.  But technology now does that independent of geography.  The printing press shared ideas across Europe and then beyond.  Radio and television made this one-way communication far faster, and telephones and the Internet made it two-way.  New technology is making our interactions even more "just like being there."  And so more innovation will be spurred by virtually—not geographically—dense populations.

 

Again and again, people with access to
the prerequisites for food production,
and with a location favoring diffusion
of technology from elsewhere,
replaced peoples lacking these advantages.

– Jared Diamond

Specialization

More than 10,000 years ago, the advent of agriculture became the most important creator of dense populations.  Ten to 100 times as many farmers as hunter-gatherers can survive in a given area, but agriculture had a much more interesting effect on human society than simply allowing it to become denser.  It gave birth to the specialist, who often earned his or her food by developing and creating technology.

The last Ice Age peaked about 18,000 years ago, and its waning forced us to adapt to a new environment.  Rising oceans reclaimed vast coastal areas that the earlier Ice Age had exposed (one estimate submerges 40% of all the land that was dry during the Ice Age).  Jungles heated up and dried out.  Grasslands replaced the dense forests that had been full of animals to hunt.  The area of southwest Asia we know as the “Fertile Crescent” looked anything but fertile to our hunter-gatherer ancestors.  Grasses were not food for humans…and then they were, after two things happened completely beyond the understanding of anyone at that time.

First, two plants crossed genetically, creating a mutation.  The fertile combination of wild wheat and a natural goat grass provided a grain sufficiently plump to be worth harvesting.  Our ancestors supplemented their diet with it and, eventually, must have learned how to plant it, too, because they were ready to exploit the second surprise.

This surprise—a second mutation—arrived about 2000 years after the first one.  It made the wheat even plumper and more attractive, but so tightly packed that it was no longer able to re-seed itself with wind alone.  If humans had not already learned how to sow seeds, the new mutation would have died out.  But it had something we wanted:  a big, nutritious seed.  And we had something it needed:  the technique of spreading and planting its seed.  By helping it survive, we changed it.  By breaking us from the often-nomadic life of hunting and gathering, the wheat changed us.

Around 10,500 years ago in southwest Asia, our harvesting of wild cereal grains became what we might call agriculture.  Less than a thousand years later it developed in China.  In the New World, it played a lesser role, probably because fewer native crops were as attractive for cultivation (corn and little barley, but no wheat or rice, which were key in southwest Asia and China).  The symbiotic relationship that is agriculture changed technology, which spread to and affected neighbors.

We had used our sickles, baskets, and grinding tools to plant, harvest, and process wild cereal grains, but that genetically altered wheat led to countless more tools because it allowed us to produce surplus.  If you have a surplus, you can trade. You need to store it, probably in the clay pots invented 12,000 years ago.  How do you keep track of who produced which food and who is trading what with whom?  Our answer was with writing, at first carved into clay, later on papyrus and then paper.

Since you want to protect the surplus, you may need soldiers with weapons.  The surplus also fed a government that coordinated everything and craftspeople who developed improved tools for planting (e.g. the plow), harvesting, storing, fighting, and worshipping (which left the biggest artifacts, such as pyramids).  The greater the surplus, the more people could specialize in something other than farming...so long as that activity produced something that people wanted.  Specialization focused us on improving our tools.

Without surplus and specialization, improving tools is done in spare time.  However, if your entire job is to create tools, you are much more likely to figure out how to improve them.  Then, the improvements can start building on each other, accelerating technological progress.  Ancient Egypt built these technologies atop agriculture:

 

Irrigation canals: Agriculture.  Water trapped in reservoirs during the annual flood was distributed by canal to the fields during the ensuing drought.

Plow: Agriculture.  Improved prior methods of opening the land by hand or sharp stick in order to plant crop seeds.

Calendar: Agriculture.  Prediction of the seasons was critical for knowing when to plant seeds.  Observing annual patterns in weather gave the Pharaoh seemingly god-like powers of predicting the annual flood of the Nile River basin.

Wheel: Agriculture in the form of the potter's wheel (3500 B.C.) to make containers for grain.  Also used for transportation on funeral vehicles (in Sumeria), though not for building the pyramids, where skids were used for dragging the giant blocks.

Writing implements: Communication and keeping track of food stores and transactions (writing on wet clay led to papyrus in 2500 B.C.).

Loom: Shelter and clothing.

Cutting tools: Building (canals and pyramids) and for making other tools.

Simple metallurgy: Making metal tools and for religion (jewelry, decoration).

 

Stone tools let us create better tools, denser populations helped us share ideas, and agriculture gave us surplus, allowing us to specialize and create even more surplus, more tools, more wealth.  How do we apply that wealth?  What technologies result from it?  At times we direct it toward a single objective.  At other times we seem to all go off in different directions, and then select the best results.  Both approaches are active today, with war often focusing resources and peace diversifying them.

 

 

Plowing It Back In

Agriculture made development of these technologies possible because its surplus could be reinvested.  By analogy, consider a simple bank investment with interest.  If the interest is plowed back in, the investment grows exponentially.  One hundred dollars invested with a 10% return becomes $110 after one year, $121 after two, $133.10 after three, and $259.37 after 10.

Agriculture plows its surplus back in by spending it on specialists who develop improved, more efficient technology.  This creates greater surplus, which can be plowed back towards even greater technological development, and so on.
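A minimal sketch of the compounding arithmetic in this sidebar (ours, not the book's):

```python
def compound(principal: float, rate: float, years: int) -> float:
    """Value of an investment whose return is reinvested each year."""
    return principal * (1 + rate) ** years

for years in (1, 2, 3, 10):
    print(years, round(compound(100, 0.10, years), 2))   # 110.0, 121.0, 133.1, 259.37
```

Surplus reinvested in specialists compounds the same way, which is why the pace of new technology accelerates rather than growing steadily.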

 

 

Either history is a series of
individual and unrepeated acts
which bear no relation to anything
other than their immediate and
unique temporal environment,
or it is a series of events triggered
by recurring factors which manifest
themselves as a product of
human behavior at all times.

James Burke

Plan or Accident

Near the end of the 18th century, when war with Britain cut France off from high-quality graphite mines, French scribes could have been reduced to quills and inkwells. Government and commerce relied on scribes to write down laws, transactions, agreements, and plans.  And they did this most efficiently with graphite wrapped in wood, commonly known as a “pencil.”  Napoleon declared the invention of an alternative pencil to be a national priority.

In 1795, Nicolas-Jacques Conté combined readily available lesser-quality graphite with clay to produce a superior pencil "lead."  Not only did this save French wartime administration and commerce, but this new pencil lead proved even better for writing than pure graphite because varying the proportion of the two ingredients controlled its hardness.  For the first time, artists could have their soft pencils and architects their hard ones.

Finding a good alternative to pure graphite was quite intentional, but the benefit of varying lead hardness was an accident.  Much technology comes accidentally.  Earlier in this chapter we mentioned the laser and its unexpected application to reading CDs and eye surgery.

Plan and accident also work in combination.  The transistor was a planned improvement on the vacuum tube.  But the inventors of the transistor based their work on the germanium crystal rectifier, which was discovered entirely by accident decades earlier.  The first radios were “crystal” radios, based on the germanium crystal rectifier, and in use long before the first vacuum tube or transistor (something we touched on in Chapter 1).

Another example:  by chance, Alexander Fleming discovered penicillin, the first antibiotic, in 1928.  Production was minuscule until World War II, when thousands wounded in battle were dying of infections that could be cured by it.  By plan, production then became a factory operation:  28 pounds were produced in 1943 but seven tons in 1945.

In a story that returns us to Napoleon, plan led to accident.  In his continuing international adventures, he needed better food supplies for his far-ranging troops, so he announced a national prize for a solution.  Nicolas Appert claimed it in 1810 by stuffing empty champagne bottles with fruits, vegetables, milk, and meat before sealing and cooking them.  The heat killed bacteria and "bottled" food fed French armies and navies.

Through several coincidental meetings and relationships, bottled food led to Bryan Donkin and several other British entrepreneurs, who replaced the glass champagne bottles with tin cans.  Nearly two centuries later, canned food is still common.

Technology comes from both plan and accident.  Plan may focus resources on goals, but, before we've invented something, we often don't know just how to do it or whether that something is possible or even desirable.  Chance works with a diversity of explorations, so that many can fail while a few produce surprising results.

 

Tactics of Bacteria and the Drugs They Battle

The development of technology by plan and chance is comparable to the different approaches taken by pharmaceutical manufacturers and the infectious bacteria they develop drugs to attack.  Drug development is typically focused on attacking a particular disease.  The bacteria, in contrast, have no focus or central plan.

In their diverse multitude, bacteria simply mutate and trade genes with other bacteria.  Natural selection culls out the trillions of unfit and rewards the fitter millions.  Because the bacterial strains resistant to our treatment no longer have to compete with their relatives we killed, they can reproduce into their original trillions.  And so medicine goes back to the laboratory to develop a new weapon.  Evolution led to the intelligence that focuses resources, and it continues to operate by pitting rapidly mutating bacteria, evolving a diversity of possibilities, against that intelligence.

 

Several of our examples for plan have come from war, which focuses resources to support grand plans.  The Cold War contained the Space Race, which focused Soviet and American resources as a conventional war would have.  It kicked off in 1957 with the Soviet Union launching Sputnik, the first earth-orbiting satellite, and started to wind down in 1969 when the U.S. landed men on the moon.

 

At base, the momentum for the arms race
is undoubtedly fueled by the technicians
in government laboratories and in the
industries which produce the armaments.

— Solly Zuckerman

 

Some suspect that war is used as an excuse to develop technology.  The "war on terrorism" of the early 21st century focused resources on surveillance and information technologies such as the U.S. government's "Total Information Awareness" system.  It was quickly renamed the "Terrorist Information Awareness" system to assure the public that, although everyone's activities would be monitored, the system would not be used against the innocent.  The war on terrorism is also leading to renewed development of nuclear weapons—this time for routing terrorists out of deep bunkers or mountain caves—and of a missile defense shield.

While plan may result in focus, and hence efficiency, a diversity of efforts gives us the most chances at valuable accidents.  Silicon Valley, in California, is known for thousands of start-up companies, pursuing wildly varying ideas—often failing, but sometimes succeeding.  The successful companies are lauded, inspiring others to copy them.  The failures make for even more entertaining discussion and reading, helping others to learn from those companies’ mistakes.  Once someone tries something, others can observe whether it should be copied, improved upon, or avoided.

For example, before developing its nuclear power industry France observed what the U.S. had accomplished by “accident.”  A diversity of competing power companies in the U.S. developed many different nuclear plant designs.  The cost to certify and build these often-custom jobs was high, limiting their appeal.  The French government focused the approach, mass-producing just three types of power plants, and provided financing.  Now, nuclear plants supply approximately 75% of French electricity, but little more than 20% of U.S. electricity.

A relatively free market in which a diversity of people, organizations, or countries can pursue their own aims makes for many fruitful accidents.  Some opportunities may require focus of resources, so diversity and focus complement each other to develop technology out of accident and plan.  In our lives, we will observe environments of focus and of diversity.  History shows us how they differ as sources of technology.  Because our decisions influence which environments will emerge, it benefits us to know when to choose the efficiency of focus and when to choose the resilience of diversity.

 

 

Congress shall have power…
to promote the progress of science
and useful arts, by securing for
limited times to authors and inventors
the exclusive right to their respective
writings and discoveries.

– Article 1, Section 8 of the U.S. Constitution

Protection

You spend much of your life pursuing a dream of a better mousetrap.  Through research, trial, and error, you finally devise a design that makes all existing mousetraps appear primitive by comparison.  You show your design to those who might finance its manufacture, but to your horror, they make and sell the new mousetraps without your control and without sharing the profits.  It would be enough to make inventors give it all up and leave the world to get by without technological improvements, leaving the “pirates” with nothing to pirate.

Recognizing how fragile the investment in developing new technology can be, Venice created a patent system in 1474 and England did so in 1624.  Like the patent system in the U.S. today, these granted a temporary monopoly on the creation of a specified invention.

In theory, this encourages more development by protecting the investment.  And there is a further benefit of publicly documenting an invention.  It allows other innovators to learn from the design and make improvements significant enough that they warrant their own patents.  Either way, society reaps the reward of increased technology development.  In practice, patents can also inhibit development of technology.  A broad patent protects so much that many potential inventions would infringe upon it, discouraging anyone other than the patent holder from developing them (see sidebar Yertle the Turtle).

The incandescent light bulb that Thomas Edison patented in 1879 shows the value of patents in promoting innovation.  Edison tested 3000 different filaments before finding that carbonized cotton thread did not quickly burn out.  With a patent for the design he developed, he produced light bulbs.

Patents are crucial for technology with high information content.  Developing a new drug costs about a quarter of a billion dollars (including a share of the developments that failed), but manufacturing it costs very little.  Without patent protection, no one would dare invest because a competitor could simply copy the research to avoid most of the cost and all of the risk.  Patents give some assurance that pharmaceutical companies will be able to charge much more for each pill than it costs them to make it, so they can recoup the investment that went into its development and fund further development.

And, by and large, the patent system has worked.  Over the past few centuries, those areas with protection of intellectual property have been major sources of technology.  But to ensure that it does not stifle innovation, technology protection must evolve with technology.

 

 

Yertle the Turtle

In Dr. Seuss’ children’s book Yertle the Turtle, a turtle with delusions of grandeur believes that he is owner of all he can see.  As the dominant turtle, he compels the other turtles of the pond to stack themselves skyward to afford him a higher throne, broader view, and greater kingdom.  The moral comes when Yertle’s reach exceeds his grasp, his tower becomes unstable, and he comes tumbling down into the pond.  Covered with the muck from the bottom of the pond, he discovers that he can see nothing else.

While Yertle could maintain his claim to the pond by whatever means a dominant turtle stays dominant, he confused seeing the many lands beyond the pond with owning them.  Similarly, protection of intellectual property can confuse an event (identifying) with a process (developing).  Technology comes from an environment that protects its development, not merely its identification.

Patent attorney Dennis Fernandez came up with the idea of television viewers seeing each other and discussing a shared program.  Fernandez has a U.S. patent for televisions with cameras that would show both a program and a headshot of another viewer, who could be far away.  Although he is also an electrical engineer, he has no plans to actually make one or more of these devices.  He is satisfied owning the intellectual property rights so that anyone who does try to make one would have to pay him.  This patent protects someone who identifies something and discourages anyone who would develop it.

Two extreme cases of “Yertle the Turtle” patents are for the wheel (U.S. 5,707,114) and the genetic code for a person living in a remote area of Papua New Guinea (U.S. 5,397,696).  These are not “rewards” for investing in the development of new technology, but the equivalent of squatting on public land and claiming it as one’s own.

In biotechnology, gene sequences have been covered under patents, even though they are not inventions and their function may not yet be known.  Some scientists complain that this departs from the intent of patents to promote innovation because it allows someone to “squat” on a gene sequence without developing scientific knowledge about its function.  This discourages others from developing that knowledge.

Without a commitment to develop, those who lay claim to an area of information, whether the design of the wheel or the DNA of a person, are just blocking innovation.  Even during the 19th century land rush in the U.S., when the government awarded 160 acres of land to whoever staked it out first, only those still working the land after five years received title.

_________________________

 

 

Innovation occurs for many reasons including greed,
ambition, conviction, happenstance, acts of nature,
mistakes, and desperation.  But one force above all
seems to facilitate the process.  The easier it is
to communicate, the faster change happens.

— James Burke

 

 

Easier communication leads to faster change because information fuels innovation.  In each of the sources of technology in this chapter we find the thread of information.

Other technology gives us greater physical capability to create new technology, but because it gives us examples of what is possible, suggesting combinations or analogous technologies, it also conveys information.  Inquisitive minds see not only the tool, but the information presented by the tool.  The tool may be useful for a different task, or, since this tool does indeed exist, an even better tool could also exist.  The information in the tool points toward new applications and new technology.

Improvements in technology were at first isolated and sporadic, but as populations grew denser, information about technology (and anything useful) reached more people.  Communication cascades as the information that one person shares with ten others is passed on to a hundred more in a chain reaction of development.  Information about how to create and use technology can even loop back to the original inventor, who may improve upon the improvements.  Today, advances in communication technology give us many of the same benefits of dense populations.

Specialization, made possible by surplus, leads to diverse technologies, each surrounded by information.  Carpenter, potter, metal smith, and alchemist developed their own vocabularies at the dawn of agriculture, much as scientists, technicians, engineers, and other specialists have in our current era.  We use this vocabulary to describe and design ever more sophisticated technology, which, like denser populations, is a process that feeds on itself by creating the need for more specialized information.

Both plan and accident are also about information.  The “plan” of planned innovation is simply information about objectives and resources to be used in developing a technology.  And accidents create information by revealing patterns in the Universe of which we were unaware—which sometimes lead to new technology.  Some environments are particularly suited to creating information through accidents (e.g. peacetime free market) and then focusing resources to exploit that information (e.g. war).  Penicillin was accidentally discovered in peacetime and then, by plan, manufactured in useful quantities during World War II.

Patent protection gives property rights over information about an invention and publicizes that information.  This makes the investment in innovation more attractive and can give other innovators new ideas.  As information becomes a greater component of technology (e.g. software and engineered drugs), technology becomes easier to copy and protecting it becomes that much more important.  But as information travels more quickly and easily around the world, protecting it becomes more difficult.  And protection can also inhibit innovation by assigning broad rights for discoveries rather than developments.

There are countless sources of technology, provided one is willing to delve deeply enough into the details.  But for our purposes, these answers are specific enough.  In the previous chapter on why we use technology, we made the point that our intent is not to create an exhaustive list but rather to clarify the question, to give it context.  For in our quest to understand and evaluate technology, it is the questions that have enduring value since new times and new technologies will bring new answers.  The answers in this chapter are a starting point, not an encyclopedia from which all answers will come.

One benefit of being able to understand and evaluate technology is that we know how to promote its development.  We may find ourselves in a position to influence our environment, perhaps by voting on a law, leading an organization, or developing a school curriculum.  The patterns in this chapter, drawn from the millions of years we have had technology, can inform the decisions we make.


 

Chapter 4

How does Technology Work?

 

 

Hidden beneath the surface,
technology of all descriptions
works according to a few,
simple principles.

 

Early stone tools took skill to create.  Even with many hours of practice, modern anthropologists cannot match the skill Cro-Magnons and Neanderthals had for chipping away at rocks to create sharp cutting edges.  Each strike of stone against stone chips away rock and changes the optimal angle of the next blow.  Of course, tens of thousands of years ago, we spent years learning how to make knives from stone because it was a matter of survival.

But as difficult as the technique may have been, understanding how the technology worked was simple.  It had no moving parts.  Little by little that has all changed.  Modern technology works in ways so complex that only teams of experts can understand them completely.

Take a car, for instance.  Specifically, consider the brakes.  Mechanical engineers understand the physical interaction of brake shoe and rotor or drum.  For antilock brakes, we need a computer engineer because computers monitor each tire for impending skid.  But few computer engineers understand all the layers within computers (software engineers may not understand the hardware and hardware engineers often divide their field into designing digital logic circuits, analog circuits, microchips, and more), so we would need a team of engineers to explain exactly how the brakes on a modern car work.

Even immortality would not allow an individual to grasp all of technology’s workings because specialists are inventing anew faster than anyone can keep up.  Of course, we can console ourselves that we don’t need to know how every technology works.  The beauty of specialization, made possible by agricultural surpluses, is that we can (and do) delegate tasks to specialists, such as engineers, technicians, and scientists.

But delegation is different from abdication, which is what we do when—knowing nothing of technology—we let the specialists make all the decisions.  We may have to rely on experts to process the details, but we can equip ourselves to comprehend their analyses, opinions, and predictions.  Threading through a sea of technical details are simple patterns that connect wide varieties of technology, explaining aspects of how those technologies work in common sense terms.

There is a pleasure in discovering simple patterns just behind the apparent complexity of technology—and seeing how they have remained true over time.  In this chapter we examine seven that pop up in the functioning of a variety of technologies:

 

1.       All technologies rely on energy, from a horse pulling a plow to gasoline fueling a car.  Many convert energy from one form to another.

2.       Technology can be distributed into many small parts (e.g. power generation at home with solar or wind) or centralized (e.g. nuclear power plants or hydroelectric dams), sometimes alternating between the two as new inventions make one better than the other.

3.       Bicycles, nuclear power plants, airplanes, and missiles rely on feedback and correction, two key elements of control systems, which keep much technology focused on the goals we set.

4.       Information is the difference between a compact disc that comes in junk mail and one containing the human genome.  In the form of rules for solving a problem, it is an algorithm, which enables us to understand a technology’s behavior without having to understand its implementation.

5.       Repetition and layers are two ways that complex technology can be composed of simple building blocks (much as the repetition of 26 letters and the layering of words, sentences, paragraphs, and chapters compose this book).

6.       Emergent behavior is about the whole being more than the sum of the parts.  Just as an ant colony behaves very differently from any individual ant, so, too, do many complex technological systems behave differently from any of their components.

 

Since these patterns have endured over time, we may find them threading through future technology, however strange and foreign it may appear.  We still need experts to design, build, maintain, and explain.  But, unless we choose to cede control of our future to those experts, we need a basis from which to understand their explanations and to form our own evaluations.  That basis starts here.

 

Energy:  the muscle behind technology

Here’s a trick that won’t work.  Wire a solar cell (which generates electricity when exposed to light) to a light bulb that shines on it.  This system creates its own energy, running the light from its own light.  A similar trick was proposed in the waterwheel invention of Robert Fludd, a 17th century London doctor.  In theory, water flowing over the wheel powered a pump that sent all the water back upstream so it could again turn the wheel.  If you could tap a bit of the energy from the solar cell or Fludd’s waterwheel to do work—perhaps run a stereo or grind wheat—then you’d perform work for free:  no external source of energy necessary.

These two contraptions do not—and cannot—work.  Such devices are commonly referred to as perpetual motion machines, and the U.S. Patent Office will not issue patents for them, though inventors still try.  The flaw in the solar cell and bulb system—just as in any perpetual motion machine—is that some energy is always lost.  By the numbers:

 

     15%   of light striking solar cell is converted into electricity (85% reflects or becomes heat)

x  99%   of the electricity going through wires is not dissipated as heat

x  50%   of the electricity flowing through the light bulb becomes light (50% becomes heat)

x  25%   of that light may actually strike the solar cell  (75% light shines elsewhere)

   =  2%   of the light that strikes the solar cell would become light striking the solar cell again

Any energy we might initially endow the system with would quickly disperse in the form of heat and of light shining somewhere other than onto the solar cell.  And even if every part of the system could be 100% efficient, tapping any energy to do work would quickly exhaust the system’s internal supply.  Without a continuous external source of energy, our contraption would just sit in the dark.
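
For readers who want to check the arithmetic, the multiplication above can be expressed in a few lines of Python.  This is only a sketch, treating the percentages as rough assumptions rather than measurements:

# Rough efficiencies from the calculation above (assumptions, not measurements)
solar_cell = 0.15   # fraction of light striking the cell that becomes electricity
wiring     = 0.99   # fraction of electricity that survives the trip through the wires
bulb       = 0.50   # fraction of electricity the bulb turns back into light
geometry   = 0.25   # fraction of the bulb's light that actually lands on the cell

round_trip = solar_cell * wiring * bulb * geometry   # about 0.02, or 2%

energy = 100.0      # endow the system with 100 units of light
for cycle in range(5):
    energy = energy * round_trip    # each loop keeps only about 2% of the last
    print(cycle + 1, energy)
# After five trips around the loop, less than a millionth of a unit remains.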

Technology—just as everything else in this Universe—needs energy to do work.  So, no matter how complicated and confusing a technology may be, here is something we do know about it:  somewhere, there is a source of energy.

Of course, energy sources can be inconspicuous.  For example, most wristwatches can operate for years on a tiny battery hidden inside, but eventually the chemical energy stored in the battery is exhausted and the watch stops.  Some watches even derive energy from the motion of the wearer’s arm or the heat differential between the arm and ambient air.  No battery, but still an external source of energy fueled by the food eaten by the person wearing the watch…and not the magic of perpetual motion.

Food was the source of energy for the first technology humans developed, and is still the only source for the tools animals use.  Energy in food is transformed to energy in muscles, which operate sticks, stones, hammers, scalpels, bicycles, and other manual technology.  The energy in food comes from nuclear fusion in the sun, which creates light, which creates plants through photosynthesis, which feed animals, which feed other animals.

Though dogs may have started living with humans 14,000 years ago, they were not the first animals we tapped to power technology.  Domesticated reindeer started pulling sledges in Northern Europe about 7000 years ago, 1000 years before we domesticated horses and 3000 years before we got them to pull vehicles.  But even with domestication of animals, the only energy source for technology was muscle powered by food.

That all changed with the sails on ships, which, more than 5500 years ago, harnessed wind.  Sails are limited to propulsion, but waterwheels, invented about 2100 years ago, used moving water to grind wheat, corn, and sugar cane, work bellows to make fires hot, and pound hammers onto rocks and metals.  Windmills, first created around 1600 years ago, were applied to many of the same tasks.  The sun, again, is the source of these energies, whether by differentially heating the atmosphere to create wind or by evaporating water to create rain and flowing water.

Fire, long useful for cooking food and staying warm, first became an energy source for technology around 100 AD.  Greek temples used steam turbines to open and close doors, as if willed by the gods.  Whether limited by materials, imagination, or motivation (a huge slave population already provided manual labor), the Greeks did not develop the steam turbine into a practical energy source.  Over time, the steam turbine was forgotten, but, by the year 1700, steam power was reinvented and went beyond novelty tricks to power steamboats, steam trains, and—for a short time—steam cars, too.  Burning wood releases the energy captured from sunlight through photosynthesis.

So does burning fossil fuel.  Coal, oil, diesel, and gasoline were plants, dinosaurs, and other animals, before millions of years of high-heat, high-pressure subterranean processing.  In 1860 the internal combustion engine tapped fossil fuel’s explosive energy.  Now the burning of diesel and gasoline propels most of our transportation and the burning of coal provides more than half of U.S. electricity.  While not renewable, fossil fuels originally got their energy from the same source many renewables do:  sunlight and photosynthesis.

Propelling arrows with gunpowder, 13th century China tapped chemical energy.  While gunpowder has been wildly successful in creating explosions for guns and mining, several attempts at using it in internal combustion engines failed.  Another form of chemical energy has proven an excellent source for technology.  In 1800, the invention of electrical batteries allowed us to convert the chemical potential in various metals into electric current.  Some evidence suggests that chemical batteries were used in Iraq two millennia ago, but it remains speculation.  Reversible chemical processes are used in rechargeable batteries, especially useful in laptop computers and cellular phones.

In 1942 we harnessed nuclear energy.  Nuclear fission (splitting large radioactive atoms into smaller ones) is used as both an energy source (nuclear plants) and a weapon (atomic bomb).  The sun, too, uses nuclear energy, but in a fusion process (combining small atoms into larger ones).  The only Earth-based fusion reactions producing significant energy have been uncontrolled, and so used only as weapons:  hydrogen bombs.  Although we have yet to control a fusion reaction to produce energy, for billions of years the fusion reaction located a safe 93,000,000 miles from Earth has been the source of most of the forms of energy we discuss in this section.

Another source not leading right back to the sun is geothermal energy.  Heat deep within the earth from both gravitational pressure and radioactivity creates a temperature differential, which can be harnessed to produce energy.  The classic view of geothermal energy is of steam shooting from the ground.  Like steam from other sources (e.g. wood or coal fires or nuclear fission reactions), this can drive turbines, which generate electricity.

Back to the sun.  In 1954, 115 years after the principle of converting sunlight directly into electricity was discovered, photovoltaic cells, or “solar cells,” made it practical.  The sun’s light causes electrons to move, which is electricity.  Solar cells are important sources of electricity on earth-orbiting satellites, the International Space Station, handheld calculators, and some buildings and homes.  Centralized generating plants are few and small compared to fossil fuel or nuclear plants.

 

 

How Solar Cells Work

Solar cells are semiconductors, similar to integrated circuits.  These silicon products are descendants of the crystal rectifiers that made possible early 20th century “crystal” radios (which we touched on in the chapter What is Technology?).  A rectifier allows electricity, the movement of electrons, to flow in only one direction.  This is crucial in solar cells because when a photon of light liberates an electron in the absence of a rectifier, that electron can fall right back into place, releasing the energy it absorbed from the photon as another photon.  Light goes into the material and light comes out.

But if a liberated electron is caught on the wrong side of a rectifier, it cannot return to the hole it left.  If the easiest way around the rectifier and back to the hole it left is through wires, light bulbs, motors, or televisions, then that electron will go that way, driving our appliances.  The more photons, the greater the imbalance of electrons on one side of the rectifier and holes on the other, the more force with which those electrons will flow through an electrical appliance to restore the balance.

Almost all the electricity we consume comes, not from these semiconductors, but from moving magnetic fields.  It’s a physical law that electrons move in a conductor (e.g. wire) when a magnetic field moves nearby.  The turbines spun by moving water in hydroelectric dams or by steam in coal, oil, or nuclear plants spin magnets near coils of wire, generating electricity.  In the year 2001, the U.S. generated 494 million kilowatt hours from solar, representing just 0.013% of the electricity used that year.  Even some of the solar power came, not from solar cells, but from moving magnetic fields, with sunlight heating water into steam to drive turbines.

 

In the use of fossil fuels we see some remarkable transformations of energy.  In the sun’s fusion reaction, matter is converted to energy (according to Einstein’s famous e=mc2 equation) at the rate of 4,600,000 tons per second.  Of the energy released from the sun, only one part in two billion reaches the earth.  Millions of years ago, some of that was consumed by photosynthesis in plants to create sugar molecules.  These molecules fueled growth of those plants, some of which were consumed by animals.  On dying, some of these plants and animals did not decay.  Instead, deprived of oxygen, they began a subterranean process involving pressure and heat that, over millions of years, produced fossil fuels.  These carbon compounds, dense with potential energy, combust to produce heat to produce motion.
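
As a rough check on that conversion rate, Einstein’s equation can be worked out in a couple of lines.  The sketch below is in Python and assumes metric tonnes and a rounded speed of light:

# e = m * c squared, applied to the 4,600,000 tons per second quoted above
c = 3.0e8                        # speed of light, meters per second (rounded)
mass_per_second = 4.6e6 * 1000   # assume metric tonnes: kilograms converted each second

power = mass_per_second * c**2   # joules released per second, which is watts
print(power)                     # about 4 x 10^26 watts, the sun's entire output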

In combustion engines fossil fuels drive the wheels of vehicles.  A 2003 study calculated that 16 acres of wheat (and millions of years) would be required to make a gallon of gasoline, about enough to travel 20 to 40 miles in a typical car.  In power plants fossil fuels produce electricity.  The process of generating electricity from sunlight by way of coal can be broken into 11 steps:

 

1.       Sun shines on ancient plants, fueling photosynthesis

2.       Photosynthesis creates sugar molecules, building blocks for biological growth

3.       Plants and animals die but, deprived of oxygen, are prevented from decaying

4.       Remains sink deep into the earth to be heated and crushed for millions of years

5.       Miners dig coal from the earth

6.       Trains transport coal to power plants

7.       Coal is crushed and refined

8.       Coal dust is combusted to heat water

9.       Steam expands and pushes turbine

10.   Turbine spins with magnets attached

11.   Moving magnetic field induces electric current in nearby coils of wire

 

From there, electricity drives technology, which continues to transform energy.  Light bulbs turn electricity to light.  Motors convert it into motion (which in a refrigerator drives a pump that creates a temperature differential).  Heaters and electric ovens convert it to heat.  Microwave ovens convert it to microwaves, which interact with foods and beverages to create heat.  Televisions and stereos convert it into information-laden light and sound (another form of energy).

The history of technology includes many of these energy transformations.  The first diesel train locomotives, like the steam locomotives that they replaced, had linkages to each of the drive wheels.  Success of the diesel locomotive came with a counter-intuitive idea:  use the diesel engine to generate electricity, which is wired to an electric motor on each of the drive wheels.  This sequence of fossil fuel to mechanical energy to electricity and back to mechanical energy eliminated heavy and inefficient mechanical linkages or transmissions.  This back-and-forth conversion is so efficient and economical that the diesel-electric engine rapidly replaced the pure diesel, and is commonly referred to simply as the “diesel engine.”

Submarines benefited from the same diesel-electric combination.  Until the snorkel, invented by a Dutch officer in 1933, submarines could not use their diesel engines while submerged because combustion requires large quantities of oxygen.  The snorkel let them draw in oxygen while running just below the surface, hidden from radar.  However, diving to safety deep below the surface still meant running on electricity.  Bulky transmissions to connect either the diesel engine or electric motor to the propeller were eliminated by connecting the diesel engine to a generator, which both recharged the batteries and powered the electric motor now connected directly to the propeller.

This approach persists in nuclear submarines, whose generators derive heat from a fission reaction, which does not need oxygen (nor does it produce exhaust, which would reveal its submerged location to hunting ships).  The heat turns water into steam to spin a turbine that turns a generator that produces electricity.  As in the diesel-electric submarines, electric motors turn the propeller.  Unlike the diesel-electric submarine, the nuclear submarine can run completely submerged until food runs out, since it produces both oxygen and drinking water from seawater.  Land-based nuclear power plants take an approach similar to the submarines.  Heat from the fission reaction turns water to steam, which spins a turbine to generate electricity.

“Fuel cells” in hydrogen vehicles demonstrate energy transformation by combining hydrogen from storage tanks with oxygen from the air to create water and electricity, which runs electric motors attached to the wheels.  These “zero-emission” buses and cars release pure water as exhaust.  It stands to reason that if energy is released by combining hydrogen and oxygen into water, then it must take energy to separate them back out.  Otherwise, we might just have the perpetual motion machine science maintains is impossible.  Various technologies are under development to perform this separation of hydrogen, but the simplest is running electricity through water.  So it is possible that a zero-emission vehicle would run on electricity from hydrogen separated from water by electricity generated by burning coal.  This system as a whole is, clearly, not zero-emission.  But it does illustrate our main point: every technology requires energy and many technologies transform it.
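
Following the energy down that chain makes the point concrete.  Here is a minimal sketch in Python, using rough, round efficiencies chosen purely for illustration (they are assumptions, not figures from this book):

coal_plant   = 0.35   # assumed fraction of coal's chemical energy that becomes electricity
electrolysis = 0.70   # assumed fraction of that electricity stored as hydrogen
fuel_cell    = 0.50   # assumed fraction of the hydrogen's energy recovered on board
motor        = 0.90   # assumed fraction of on-board electricity delivered as motion

coal_energy = 100.0   # start with 100 units of chemical energy in coal
at_the_wheels = coal_energy * coal_plant * electrolysis * fuel_cell * motor
print(at_the_wheels)  # roughly 11 units reach the wheels; the emissions happened at the coal plant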

When energy is as ready as switching on a light or pressing a gas pedal, it is easy to ignore.  Imagine being stripped of advanced technology for generating, converting, and consuming energy.  Thousands of people begin foraging for sticks shortly after dawn.  Before dusk, they shoulder bundles of wood for miles, selling them in towns to buy food for another day.  Burning wood cooks food, heats homes, and drives primitive industry.  To see this in the 21st century, visit Ethiopia, where 90% of consumed energy comes from biomass: wood, charcoal, and cow dung.  The capital, Addis Ababa, draws 15,000 “women fuelwood carriers,” walking up to 10 miles with loads of 70 to 100 pounds, which sell for as much as 70 cents.  The fuel carriers are acutely aware of where energy comes from.

That awareness would be useful for those who can casually tap hundreds of horsepower in their cars, similar amounts in their homes (for heating, lighting, appliances, and entertainment), and more in elevators, airplanes, and climate-controlled businesses.  Choices, amplified by and dependent upon technology, are informed by such awareness.  The amount of energy we control has increased dramatically over the millennia and it appears that the trend will continue.  Far in the future, when technology may appear quite unlike anything we have today, understanding and evaluating it will still depend on grasping the source of its energy.

 

 

 

 

Organization Part 1: Centralized vs. Distributed

Another characteristic that will persist into the future is the organization of technology systems as either centralized or distributed.  The movie Back to the Future depicted a “Mr. Fusion” reactor mounted in a car, energy production distributed to the point of use.  But there are good reasons that nuclear energy production is currently centralized:  nuclear power plants are complex, expensive, dangerous, and require expert maintenance.  Other technologies, including solar cells and windmills, can be either centralized or distributed, with advantages for each.  History shows many technologies swinging from centralized organization to distributed and back, influenced by new capabilities, concerns, or requirements.

The “monster in the basement” was what Mrs. Vanderbilt called the steam engine that ran an electrical generator to power the new electric lights in her house.  The fabulously wealthy Vanderbilt family had replaced their candles and gaslights shortly after Edison’s 1879 invention of the incandescent light, but before he developed a centralized power system.  The monster in the basement was an example of a distributed power system and, with visions of high-pressure steam exploding through the floorboards, Mrs. Vanderbilt came close to throwing it out.

Until the 19th century, many factories were built next to rivers so that waterwheels could provide power to grind wheat, spin cloth, or cut wood.  That distributed generation has become centralized, with most power today generated by large oil, coal, gas, nuclear, or hydroelectric plants.

 

Distributed 11th vs. Centralized 21st

In 11th century England, 5624 water mills provided most of the non-muscle power to between 1.25 and 2 million people.  1000 years later, in the 21st century United Kingdom, 177 power stations generate electricity to support almost 60,000,000 people.  Another way to look at it:  about 300 people were supported by each water mill in the 11th century and many more than 300,000 people by each power plant now.  So there has been more than a 1000 to 1 move toward centralization.

Some of the factors we did not consider tend to offset each other.  Unlike 11th century inhabitants, 21st century people use more than electricity (e.g. fossil fuel for cars, trucks, airplanes, and trains).  But we can safely say that 21st century people consume more electrical energy than 11th century people consumed of any kind of energy.  A millennium ago, there were no electric lights, computers, refrigerators, microwave ovens, or televisions on which to spend energy.

So, even though each 11th century inhabitant consumed less energy, they had 1000 times as many power plants per person as we do now.  How can this be?  Our few, centralized plants are far larger and more powerful than the many distributed plants they had.
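
The arithmetic behind the sidebar can be checked in a few lines of Python (taking 1.7 million, the middle of the 1.25-to-2-million range, as an assumption for the 11th century population):

medieval_population = 1700000      # assumption: midpoint of 1.25 to 2 million
medieval_mills      = 5624
modern_population   = 60000000
modern_stations     = 177

per_mill    = medieval_population / medieval_mills    # about 300 people per water mill
per_station = modern_population / modern_stations     # about 339,000 people per power station

print(per_mill, per_station, per_station / per_mill)  # the last figure: more than 1000 to 1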

 

 

A friend of the author grew up on a small farm in Iowa, where a windmill provided energy for lights and a radio.  In the 1940s, power lines marched past, erected by the new Rural Electrification Administration (REA).  The price for connection to the reliable and centralized power?  Dismantling the windmill.  The REA wanted no competition from that old farm.  Centralized energy production was eradicating distributed, but, like a pendulum, the trend can also swing back.

In recent decades more people have started generating their own electricity with windmills in their yards or solar cells on their roofs.  This distributed approach can be attractive for remote locations, which are often expensive to connect to the electrical distribution grid.

This pattern of centralized vs. distributed organization also shows up in technologies other than energy generation.  Until the 1940s, residents of Key West, Florida, used cisterns to catch rainwater, a highly distributed water system.  Then the U.S. Army built a central reverse-osmosis desalination plant to replace cisterns on the one-mile by four-mile island off the Florida coast.  By order of the U.S. Army, all cisterns were filled with salt water, covered over, or somehow disabled to prevent any diseases harbored by standing water.

Surprisingly low water pressure from the new system spurred nocturnal work habits among residents, who often washed laundry at 2:00 AM when water pressure was a little higher.  Even at low pressure, the desalination process was expensive, so it was replaced with a 16-inch pipe that brought water from the mainland.  Vulnerable to storms, the new pipe inspired little confidence and cisterns returned, cleverly disguised (one as a fireplace) to avoid the U.S. Army’s unwanted assistance. 

What factors cause technology to organize in a centralized or distributed way?  One is technological capability.  Because mid-20th century computers were huge, expensive, and required expert maintenance (and security and air conditioning), they were centralized.  Late-20th century microprocessors were tiny, cheap, and ran without maintenance, so they were distributed (to microwave ovens, VCRs, and the doorknobs of hotel rooms)…except when other factors came into play.  Some computing problems were so large (tracking every purchase at Wal-Mart) that they needed many coordinated microprocessors, so these were centralized…until the Internet and new software allowed some of these tasks to be distributed (e.g. calculation of protein folding patterns or the search for prime numbers).

Environmental conservation and volatile energy supply are two other factors that affect the organization of technology.   Environmental conservation has encouraged some Californians to invest in solar power for their homes.  In 2001, unpredictable energy supplies in California (prices jumped as high as fifty times previous levels and blackouts rolled around the state) prompted another movement towards decentralized solar energy production.

By contrast, convenience and simplicity of distribution have pushed energy technology toward centralization.  Distributing electricity is much easier than distributing solid fuels, such as coal.  Centralized coal-fired electrical generation saves us from having coal dropped off at each of our homes, as was once done for heating purposes.  If we wished to use it to generate electricity, we would each have to operate and maintain our own little power plant.  This would be much harder than training a dedicated and expert staff, operating in shifts around the clock, to run centralized power plants.

Efficiency is another advantage of centralization, given current energy technology.  Not only are larger plants more efficient, but they can run continuously because someone is always using power.  It would be a waste to run our own plant when we are not consuming electricity or to shut it down and start it up each time we do.  When large plants have to shut down for maintenance, starting them back up is a lengthy and expensive process, so it happens as infrequently as possible.

But just when we are sure that the choice between centralized and distributed organization is obvious, technology can change and so can the best choice.  Thomas Edison’s first electrical power plant was in downtown New York City.  A high density of consumers was crucial because Edison used direct current (DC), which does not travel well.  Over long distances, most of its energy is lost to heat.  Clearly, distributed power production appeared to be the future of electricity.  And then technology changed.

Nikola Tesla invented and developed technology for alternating current, which can travel well over long distances.  Today, alternating current dominates (alternating 60 times per second, or 60 Hertz in the U.S.) and towers hundreds of feet tall march across rural areas, carrying extremely high voltage from centralized power plants to far away consumers.  Since then, the long-term trend has been toward centralization, as technological advances allowed larger, more efficient fossil fuel and nuclear plants.

New technology may well follow trends from centralized to distributed and back again.  Having seen this pattern in energy production, water distribution, and computers, we may find it easier to identify elsewhere.  Centralized and distributed systems have been tested over thousands of years in technology and over millions of years in biological systems—plants and animals.  Studying the tradeoffs made and the environments in which each approach has been most successful may save us from making costly mistakes.

 

 

Control: like riding a bicycle

A completely different pattern in how technology works concerns control.  What does “control” mean in technology?  Bicycling explains.

Bicycles are not stable.  Rushing along on just two wheels, we are constantly falling to the left or the right, but once we’ve mastered “balance” we subtly turn the handlebars in the direction of the fall and start the process going to the other side.  Don’t believe it?  Try holding the handlebars dead straight or—easier to do—drop both wheels into a narrow channel just wide enough for the tires, like the rut between railroad tracks and a crossing street.  Wear a helmet, gloves, and appropriate body armor—and do not try this where trains are running.

OK, for liability reasons, please do not try this at all.

To ride we need two things:  feedback of which way we are starting to lean over and correction with the handlebars.  This section is about how control (feedback and correction) is a pattern common to many technologies.

For most of history, control has been provided by the human component in the system, such as the rider on the bicycle.  Even in the striking of one stone against another to create a sharp edge there is an element of control.  After each strike, the human observes how the stone has chipped (feedback) and adjusts the next strike accordingly (correction).  One of the oldest technologies to appear to control itself without a human in the control loop is a curious novelty from ancient China.

Living in China around 300 AD, you might have been lucky enough to see the strangest carriage in the world.  A small statue pivoted on top of the carriage to point south, no matter which way the carriage turned.  How did it work?  The magic was performed by gearing very much like the differential in a car’s drivetrain, which allows the left and right wheels of a car to turn at different speeds.  And this is critical since, as a car turns right, for instance, the left wheels have to travel farther and spin faster than the right wheels.  In the case of the Chinese carriage, the different speeds of the left and right wheels determined how far to rotate the statue.

Whatever direction the statue started out pointing (presumably south), the mechanism would keep it pointing that way… assuming that the wheels did not slip on the ground and all the machinery were perfectly precise.  In practice, imperfection and slipping would be present.  Small errors would accumulate and eventually the statue could be pointing in any direction, so it failed to become anything more than a novelty.

The South Pointing Carriage fails to incorporate control because it lacks feedback when it wanders from pointing south and it lacks correction to reposition it to point south.  What the ancient Chinese needed was a control system that incorporated something they had already invented:  the magnetic compass.  And something invented over 1600 years later would have been useful for monitoring the compass and signaling for appropriate corrections:  the microprocessor.  Its invention in 1971 has done more to remove humans from the control loop than anything else in history.

Microprocessors can monitor sensors, follow algorithms, and operate motors, lights, and other electric devices.  With modern technology, the inventors of the South Pointing Carriage could have monitored a magnetic compass with a microprocessor.  They would have programmed it with an algorithm to run an electric motor to rotate the statue until pointing south.
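
A minimal sketch of that control loop, written in Python with a stand-in for the compass (read_compass here is a hypothetical function, not a real device driver), shows how feedback and correction work together:

import random

def read_compass(heading):
    # Hypothetical sensor stand-in: reports the statue's current heading in degrees,
    # where 180 means the statue points due south.
    return heading

statue_heading = 180.0                           # start the statue pointing south
for step in range(10):
    statue_heading += random.uniform(-5, 5)      # the carriage turns; small errors creep in
    error = 180.0 - read_compass(statue_heading) # feedback: how far off south are we?
    statue_heading += error                      # correction: rotate the statue back to south
    print(step, round(statue_heading, 1))        # the statue returns to 180 degrees each time

The South Pointing Carriage had the correction but not the feedback; with no way to sense south, its errors simply accumulated.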

This ability to monitor a condition, follow an algorithm, and operate devices makes microprocessors good at the feedback and correction necessary for control.   So good, in fact, that they have invaded many technologies, such as thermostats, antilock brakes, microwave ovens, and video games.  This is beginning to remove humans from the loop, though we are still involved in setting the parameters, such as the temperature that the thermostat should try to maintain.  As technology becomes more sophisticated, requiring finer and quicker control, and computers become ever less expensive, we may find human control farther and farther removed.

In nuclear power plants, airplane cockpits, and intercontinental ballistic missile launch sites the stakes for control mistakes are high.  So how should they be controlled?  Humans are more fallible than technology, which shows few tendencies to sleep, forget, drink, gamble, or get into compromising situations subject to blackmail.  On the other hand, humans have a much greater contextual awareness than does technology, which does not yet read the morning paper or know about circumstances that might impinge on a decision.

For now we compromise by keeping both technology and humans in the control loop of critical systems.  Technology does the repetitive (check the temperature 10 times per second, 24 hours a day) and humans make the really big decisions (flood the reactor core).

A twist in history gives us an example of human control being inadequate.  Mid-20th century, a new airplane design called the “flying wing” was introduced.  The military aircraft was all wing and no fuselage, a very efficient shape.  Unfortunately, it is also an unstable shape and human pilots had difficulty controlling it.  So much difficulty that the design was abandoned.

Near the end of the 20th century, the designers of the stealth bomber independently came up with the same flying wing shape.  At that time, computers could be used on-board to operate all of the aerodynamic control surfaces, keeping the plane stable and flying in the direction the pilot indicated.  Like antilock brakes, the on-board computers were able to make many comparisons and corrections each second.  There is an interesting coincidence between the stealth bomber and the old flying wing.  The stealth designers used computer simulations to determine the optimal dimensions of the modern plane.  They came up with a wingspan of 172 feet.  Later, when they realized how similar their new plane and the flying wing appeared, they looked up the old specifications.  The flying wing had a wingspan of 172 feet.

One way to look at this: we relinquished control of one technology (airplanes) to another (computers).  Essential to the feedback and correction in a control system is information.  Correction is based on feedback, which is information.  Modern airplanes have replaced mechanical linkages between the pilots’ controls and aerodynamic control surfaces (e.g. flaps, ailerons, elevators) with wires.  In these “fly by wire” systems, information in the form of electrical signals is clearly at the heart of the aircraft.  The stealth bomber took this farther by letting an information-processing computer operate the controls.  Information is at the heart of many technologies.

 

 

If there were something like
a guidebook for living creatures,
I think the first line would read like
a biblical commandment:
Make thy information larger.

— Werner Loewenstein

Information: algorithms

Information controls how technology works.  It has since long before the stealth bomber.  Silk looms in 18th century France stored information about the patterns to be woven as pegs on a cylinder, holes on strips of paper, and holes in cards (invented by Bouchon, Vaucanson, and Jacquard, respectively).  That information controlled which colored thread of silk the loom wove through the fabric at each of thousands of steps.  This automated the fabrication of complex designs that consumers wanted in their silk clothing, tablecloths, and wall hangings, which could be ruined by the careless mistake of a fatigued worker.

Looms inspired computers, which initially stored information with mechanical gears, paper tape, and punch cards before such modern developments as the optical compact disc (CD).  The information on a compact disc, for instance, can control a computer by telling it what sounds to play (if a music CD and the computer has an application that plays sound files) or what instructions to execute (if an application CD).  This example shows two different kinds of information, which computer scientists term “data” and “program.”

The peg or holes in silk looms represented data, as did the information (“feedback”) from sensors on the stealth bomber.  Program information for the looms was in the techniques of the loom operators and was shared verbally (from master to apprentice).  Program information for the stealth bombers was in computer applications and was shared magnetically (from development computer to on-board computer).  While programs may be complicated—computer programs may have millions of lines of code—they are based on algorithms, which are defined as the rules for solving a problem.  For instance, a basic thermostat controlling a home heater, whose computer program would be completely unintelligible to most of us, follows this simple algorithm:

 

1.       Measure current temperature (data)

2.       Compare it to the desired level set on the dial (data from the human operator)

3.       If current temperature is less than set temperature, then turn on furnace; otherwise, turn it off.

4.       Loop back to step 1
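
That algorithm translates almost line for line into code.  Below is a minimal sketch in Python; read_temperature and set_furnace are hypothetical stand-ins for the thermostat’s sensor and switch, not real functions:

import time

def thermostat(read_temperature, set_furnace, desired):
    while True:
        current = read_temperature()   # step 1: measure current temperature (data)
        if current < desired:          # steps 2 and 3: compare to the setting on the dial
            set_furnace(True)          # too cold, so turn the furnace on
        else:
            set_furnace(False)         # warm enough, so turn it off
        time.sleep(60)                 # step 4: wait a minute, then loop back to step 1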

 

Something common in bathrooms gives us another example.  The Sonicare™ electric toothbrush, which promises to clean and whiten your teeth with high-speed vibration of the bristles, follows this algorithm:

 

1.       If the button is pressed, run for two minutes before turning off.

2.       If the button is pressed while running, turn off.

3.       If off for less than 45 seconds (you just wanted to add more toothpaste, for instance) when the button is pressed, run for whatever was left of the original two minutes.

4.       If off for more than 45 seconds or placed back in the charger before the button is pressed, run for a full two minutes.
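
The toothbrush’s rules can be sketched the same way.  The Python below is one guess at how to express them—an illustration, not the manufacturer’s actual program:

FULL_CYCLE = 120       # seconds of brushing in a full session (two minutes)
RESUME_WINDOW = 45     # seconds during which a stopped session can be resumed

class Toothbrush:
    def __init__(self):
        self.running = False
        self.remaining = FULL_CYCLE   # brushing time left in the current session
        self.seconds_off = None       # time since last switched off (None = a long time)

    def press_button(self):
        if self.running:                       # rule 2: pressed while running
            self.running = False
            self.seconds_off = 0
        else:                                  # rules 3 and 4: resume or start fresh
            if self.seconds_off is None or self.seconds_off >= RESUME_WINDOW:
                self.remaining = FULL_CYCLE
            self.running = True

    def place_in_charger(self):
        self.seconds_off = None                # rule 4: the charger resets the session

    def tick(self):                            # called once per second by a timer
        if self.running:
            self.remaining -= 1
            if self.remaining <= 0:            # rule 1: stop after the two minutes are up
                self.running = False
                self.remaining = FULL_CYCLE
                self.seconds_off = None
        elif self.seconds_off is not None:
            self.seconds_off += 1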

 

We find a slightly more complicated, but also more familiar, algorithm in the cookbook Laurel’s Kitchen:  A Handbook for Vegetarian Cooking and Nutrition.  Their recipe for cornbread:

 

Ingredients

2 cups cornmeal

½ cup wheat germ

1 teaspoon salt

½ teaspoon baking soda

1 teaspoon baking powder

1 tablespoon brown sugar

1 large egg, beaten

1 tablespoon oil

2 cups buttermilk

Instructions

Preheat oven to 425.

In a large bowl stir together dry ingredients.

In another bowl, mix the wet ingredients.

Combine the two just until they are well mixed.

Turn into an 8” x 8” baking pan, well greased.

Bake for 20 to 25 minutes.

 

Notice that the recipe does not specify who is following the rules.  It could be a woman, a boy, a robot, or some brilliantly coordinated insects.  The technical phrase for this flexibility is “substrate independence,” but we can also view it as separating function from form, procedure from implementation, or information from matter.  Because of the algorithm’s substrate independence, understanding the behavior of a technology can often transcend the details of how that technology is implemented.  Even when we comprehend little else about a technology, algorithms can allow us to predict its behavior.

Computers have been built on substrates of mechanical gears, magnetic relays, vacuum tubes, discrete transistors, integrated circuits of millions of transistors, test tubes of DNA, and even Tinker Toys™, the children’s toy of wooden blocks, pulleys, strings, and sticks.  Daniel Hillis, a computer scientist who designed one of the most advanced computers of the 20th century, created a Tinker Toy computer to play tic-tac-toe.  It had neither the power nor the speed of the commercial products he designed, but it operated on the same algorithms.  He summarized this substrate independence for computers:  “One of the most remarkable things about computers is that their essential nature transcends technology.”  One of the most remarkable things about technology is that its essential nature transcends matter.  Information in the form of algorithms is why.

We can apply this principle to understanding nanobots, microscopic robots built on the nanometer scale (each part of the robot might be just billionths of a meter or a few atoms in a row).  Because nanobots have not been invented yet, we can only speculate about how they would work.  Some call them science fiction; others predict they are a likely consequence of the nanotechnology we are already developing (which is still primitive, limited to such inventions as sunscreen with nanometer scale titanium dioxide particles to better block ultraviolet rays and Eddie Bauer pants that won’t stain even with red wine).  The consequences of nanobots are interesting enough and serious enough for us to start thinking about them just in case they are possible.

Nanobots could manipulate matter at the nanometer scale, which means they could arrange and rearrange atoms.  Scavenging atoms and molecules from their surroundings, they could make copies of themselves.  The Sorcerer’s Apprentice segment of the movie Fantasia, in which Mickey Mouse lets replicating brooms get out of control, suggests what could happen with nanobots.  Once a nanobot makes a copy of itself, both it and the copy make copies.  Then those four each make copies.  Even if making a copy took a day, there would be one billion of them at the end of a month and 1153 quadrillion at the end of a second month.  What’s to stop this?

Perhaps an algorithm.  The self-replicating nanobot could include a counter, which would start at “10” in the original nanobot.  Then, when a copy is made, the counter decrements to “9” and the copy has its counter set to “9”, too.  The four nanobots in the next generation each have counter values of “8”.  When the counter reaches “0”, the nanobots stop multiplying.  This would result in 10 generations or 1024 nanobots.  If this is not enough for our purpose, we could program the counter to start at a larger number.  Even without understanding the substrate or the implementation of nanobots, we can understand this algorithm.
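
Both the runaway doubling and the counter-limited version can be simulated in a few lines of Python—a sketch that reproduces the numbers above without our knowing anything about how a nanobot would actually be built:

# Unchecked replication: every nanobot copies itself once per day
print(2 ** 30)      # about 1.07 billion after one month
print(2 ** 60)      # about 1153 quadrillion after a second month

# Replication limited by the built-in counter described above
nanobots = [10]                                  # one original nanobot, counter set to 10
while any(counter > 0 for counter in nanobots):
    next_generation = []
    for counter in nanobots:
        if counter > 0:
            next_generation += [counter - 1, counter - 1]   # the original and its copy
        else:
            next_generation.append(counter)                 # finished multiplying
    nanobots = next_generation

print(len(nanobots))    # 1024: ten generations from a starting counter of 10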

This is not a new technique; nature already uses it to prevent rampant replication.  Our algorithm is similar to that in some cells, which divide until their counter—a steadily shortening tail of telomeres—runs down.  Now we get to apply something we know about in one field—medicine—to technology.  This allows us to anticipate what might happen if the counter fails.  Cancer is a disease that defeats the telomere countdown, mutating the cell to allow unlimited reproduction.  Could something like cancer afflict nanobots?  Could a mutation in a self-replicating nanobot change its algorithm and allow it to replicate forever?  What would stop it from converting all material on the surface of the earth into copies of itself?  Evaluating nanotechnology while nanobots are still science fiction would seem a good idea.

 

Free Matter and Valuable Information

Fabricating from the atoms up with nanotechnology—perhaps even scavenging the carbon atoms with which we have polluted our atmosphere through smokestacks and vehicle tailpipes—suggests that information may become more valuable than materials.  To explain, we start with a couple of technologies that already exist.

The value of a compact disc (CD) depends almost entirely on something invisible to the naked eye.  Suppose you receive one in the mail.  How much is it worth?  If it is a new computer application, it could be hundreds of dollars.  If a music CD, perhaps $10 to $15.  If a pitch for Internet service (and you are already satisfied in that way), it is worthless—at best it would make a shiny drink coaster on your coffee table.  If it contains the human genome, then the information on it cost hundreds of millions of dollars to create, but since that information is freely downloadable, the CD is not worth much.  If you could only take that CD back a few years in time, you could sell it to those about to spend all that money on the human genome project!  In short, the value of the CD you hold in your hand has much more to do with the information on it than with the materials that compose it.

New digitally controlled chemical molding systems may soon download information on the design of a table or bookshelf and then have the item pop out of an automated factory.  Current designs of this “automated factory in a box” already produce hulls for boats, but could produce almost anything that can be molded out of plastic.  Soon it may be possible to drop these from airplanes into remote areas suffering from natural or human disasters.  They could produce sections of irrigation pipeline in the morning, download new plans, and produce containers to store food in the afternoon.

In the future, nanotechnology may remove the restriction that the product be molded out of plastic