I'm the President of the Friends of the El Cerrito Library. By default that makes me Chairman of the Board, too.
Not that I'm in charge of the organization. Far from it. Most of the Board members are little old ladies in tennis shoes. Among them are a retired analytical chemist, a retired nuclear physicist, several retired editors and... well, you get the drift.
They're the ones who run the show. I just cooperate with the inevitable.
The members of the Friends Board, like the members of most community-based public benefit organizations' Boards, are almost exclusively retired people. Chances are that the same is true of the Friends Board in your community. (You did know that every public library in the U.S. has a Friends organization associated with it, didn't you?)
That's because retired folks have the time to serve as Board members. There aren't many (relatively) young whippersnappers like me who're willing to take the time out from their busy careers to help stage fundraisers, staff booksales, write letters to large donors and so forth, but, since I'm The Laziest Man On Earth, I make the time. And, as a consequence, I've gotten to know a lot of older people--many of them in their 80's and 90's.
Now, you're reading this in January, 1999, but I'm writing it in the wee hours of November 1, 1998. As I write, John Glenn is still enjoying his second ride into space and serving as a general source of inspiration for old folks around the globe.
That's set me thinking.
I was born before the age of space flight. I was eight years old when Glenn took his first trip into orbit, sixteen when Neil Armstrong took "one small step" from the Eagle Lunar Module onto the surface of our Moon.
Just two months after Apollo 11's flight took place, computer scientists at UCLA brought up the first node on the Defense Department-funded packet-switched network known as the ARPANET. By December, Stanford University, UC Santa Barbara and the University of Utah were hooked in via land lines and what would eventually become the Internet began to evolve.
Nearly thirty years has passed since then. I look at the changes that those three decades have wrought and I can't help but think about my friends in the Friends.
21st Century Schizoid Man
You see, I'm a science fiction fan. I've been a science fiction fan since I was six years old. Perforce, I am never surprised by mere technical advances--and rarely by cultural ones. For my friends in the Friends, though, it's a different story.
They were born before the advent of antibiotics--perhaps the single most important discovery in human history. They grew up in a world of iceboxes instead of refrigerators, of radio rather than television, of rail lines in place of airlines. In their salad days, a "computer" was a human who did mathematical drudge work for a living. The only electronic networks belonged to telephone and power companies--and all telephone calls required operators. There were no beepers, no cell phones, no microwave ovens--and not so much as a hint that someday there would be such things as personal computers, much less palmtops.
I usually devote my January column to prophecies about what will happen to the Internet over the course of the next year. I've had good years--where most of my prognostications panned out--and bad ones where I was wrong almost across the board.
As the physicist Niels Bohr observed, "Prediction is extremely difficult. Especially about the future."
Not that I let that stop me.
Properly speaking, since there was no Year Zero, the 21st Century won't start until 12:01 AM, January 1, 2001. On the other hand, the big count-down party starts on Friday, December 31, 1999. As you leave work that afternoon, you'll tell your co-workers, "See you next century!" And, darn it, it's the Year 2000 Problem--not some Year 2001 Problem--that will dominate much of the next 12 months.
Hey, we work in the computer industry. For us, zero is a number. So, you'll be seeing a lot of forecasting about what the new century will bring come December or so.
I've always been ahead of my time. Back when it started in LAN Times magazine, this was the first regular column about Internet technologies and policies in the computer trade press.
And I mean ever.
Thus, it's only fitting, right and natural that I get the jump on all the other latter-day Nostradamus wanna-bes and say my sooth about the next 100 years now, before the crowd shows up.
Who Knows What Tomorrow May Bring
I've found that the easiest predictions to get right are those about the very near and the very distant future. The bit in between is much messier and harder to get a handle on.
I think it's safe to project that, if and when HDTV takes off, the VHF spectrum it frees up will quickly be soaked up by PCS-like devices (for which I'm now officially claiming coinage of the term "PCS II"). Since the FCC has mandated conversion of all existing TV broadcasts to HDTV by 2007, I also think it's reasonably safe to assume that the final switch-over will happen no later than, say, 2012.
I say that, because I'm reasonably sure that HDTV won't happen on the FCC's present schedule. Essentially, there's a chicken-and-egg problem standing in the way of its broad acceptance. TV broadcasters are, not unreasonably, reluctant to expend the very considerable sums of money necessary to convert their operations over to native HDTV in the absence of an established audience. And the audience doesn't yet exist because HDTV sets cost an arm, a leg and your first-born child.
There are other potential roadblocks in HDTV's path, as well. The FCC gave all existing VHF broadcasters HDTV frequencies to replace the VHF spectrum they're theoretically supposed to hand over in 2007. Many of those broadcasters have opted to lease out their HDTV frequencies to PCS cell phone and other communications companies. They're going to resist giving up that lucrative (and free) cash flow just because 2007 rolls around--and their leaseholders are going to throw a fit.
It gets still more complicated when you look at the task of shoehorning HDTV into cable TV systems. HDTV demands an enormous amount of bandwidth--four times what conventional NTSC broadcasts consume. That means that every broadcast program will require four times the spectrum on cable that it currently occupies, and that additional requirement comes at a time when broadcast TV's share of the total television audience is shrinking, not growing.
The way I see it, the most likely scenario is that the FCC will require all cable TV programming to be produced in HDTV format. HBO, Showtime and the Navel Lint Channel will all kick and scream about it, but--in the long run--the consumer will be the winner. When all TV is transmitted in HDTV, HDTV sets will become a lot more affordable.
Heck--you'll probably even get to keep your first-born child.
But, boy, is the cable industry ever going to need to beef up its carrying capacity. When all TV is HDTV, the amount of bandwidth needed just to accommodate the current programming load quadruples.
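To put some rough numbers on that, here's a back-of-the-envelope sketch. The channel count and per-channel figure are illustrative assumptions of my own, not industry data; the only number borrowed from the argument above is the four-to-one HDTV ratio.

```python
# Back-of-the-envelope cable bandwidth sketch.
# Illustrative assumptions: 6 MHz per conventional NTSC channel and an
# 80-channel lineup; the 4x HDTV-to-NTSC ratio is the one cited above.

NTSC_CHANNEL_MHZ = 6      # spectrum occupied by one NTSC channel
HDTV_RATIO = 4            # HDTV bandwidth demand relative to NTSC
CHANNEL_COUNT = 80        # hypothetical cable lineup

ntsc_total = NTSC_CHANNEL_MHZ * CHANNEL_COUNT
hdtv_total = ntsc_total * HDTV_RATIO

print(f"Today's lineup:  {ntsc_total} MHz")   # 480 MHz
print(f"All-HDTV lineup: {hdtv_total} MHz")   # 1920 MHz
```

However you juggle the assumptions, the direction is the same: the numbers run away from today's coaxial plant in a hurry.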
And let us not forget the gigabit demands of cable-modem Internet access.
There's only one long-term solution for the insatiable need for bigger pipes that the convergence of HDTV and cable-based Internet access will create in cable MSO-land, and that's ubiquitous fiber to the set-top. Consider that another form of convergence is simultaneously happening even now--the convergence between voice and data networks. And keep in mind that Internet content itself is growing richer--and thus more bloated--with every passing month.
Think that trend will reverse any time soon?
Me neither.
In fact, I think it's going to accelerate and expand. Once the pipes get fat enough, the relentless march of technology will produce ever newer and more bandwidth-intensive applications to soak it all up. For example, the primitive webcam technology with which we're all familiar today will evolve over time into a host of interactive video applications. Combine the trend toward home offices with the need for face-to-face interaction and you get immersive teleconferencing, which will marry application sharing with highly-detailed 3-D video and stereophonic audio to produce a simulacrum of a face-to-face meeting indistinguishable from the real thing.
Other than the smell of your boss's aftershave, that is.
That's not all. Think about what Doom and Quake II have wrought in the workplace now. The same technology that will bring us immersive teleconferencing, differently applied, will permit users to interact either alone or in groups with virtual environments so convincing that addiction to them will become a serious social problem.
Think I'm exaggerating?
Forget about immersive virtual combat and sports simulations. What do you think will happen when cybersex moves from fuzzy webcam images and one-handed typing to interactive tactile feedback, 3-D HDTV video and digital surround sound? How do you think that will affect 14-year-old surfers? How about 40-year-olds?
I think it will bring a whole new dimension to the concept of a mid-life crisis--and a very different slant on dating!
Well, okay, so the good news is that this dark cloud shows a hint of silver lining. With all the teenagers occupied with virtual orgies, at least the streets should be safe, right?
Don't count on it.
Where the Streets Have No Name
Since 1994, I've been warning my audiences about the dangers of the growing gap between the digital "haves" and "have-nots". We are in danger of creating an institutionalized underclass whose denizens, unlike their predecessors, will have essentially zero chance of ever "rising above their station in life".
How so?
Virtually every middle-class school child in the U.S.A. now has routine access to a personal computer. Most of them also play platform games, program VCRs and otherwise live submerged in a digital environment that has become so ubiquitous that they don't even notice it. Like air, they breathe it in without concern that their supply might abruptly be cut off. They marinate in it from the time they're toddlers and, by the time they enter the workforce, they will have spent so much of their lives using it that, even without formal training, nearly every one of them will be perfectly comfortable working in a computer-intensive workplace.
The same is not true of the poor.
Although they have TVs, VCRs and even video games, what they mostly don't have is regular access to computers. In California schools, they're lucky if there's one computer per class. Over a 35-hour school week, that works out to one hour of computer time per pupil--and many of those computers are ancient DOS machines or even Apple II's, not modern Linux, Mac or Windows boxes.
Those children will enter the workforce with an awful liability. And even motivation and talent won't enable them to overcome the advantage that literally a lifetime of exposure will give their middle-class cohorts in competing for jobs and promotions. They'll be stuck in a hole not of their own making and--much, much worse--they'll pass that disadvantage along to their own children.
That's a prescription for class warfare, if I ever heard one.
The promise of America, indeed, the promise of the developed world, is that anyone, no matter their natal station, can aspire to rise to the top through hard work, dedication and commitment. When that promise is withdrawn from a whole class of citizens, we lay the groundwork for a new caste system--and the seeds of a poisonous envy and hatred that no amount of good intentions can overcome.
Without hope, there is no future for the poor.
How much worse will it be, then, when the rich can afford not only to immerse their own children in the most powerful computer environment, but can actually have their genetic code optimized at conception? That, too, is on the horizon--and perhaps not as far away as you might think.
The Human Genome Project expects to have a complete and accurate sequence of the entire human genetic code published by 2003. Ian Wilmut cloned a sheep in 1997 and Richard Seed announced in January, 1998, that he plans to clone a human as soon as he can arrange financing for his work. Gene therapy--using retroviruses to implant new genetic material in adult or fetal organisms--is in use in numerous clinical trials of everything from remedies for congenital diseases to cancer therapies.
So the rich--and maybe even the middle class--will be able to afford to have their children improved, enhanced, altered to better compete. The poor will not.
Perhaps, though, genetic technology will give the poor someone to whom they can feel superior. When manipulation of germ plasm is routine, how long will it be before someone decides to endow one of the lower orders--a chimpanzee, for instance--with human-level intelligence? And how will society deal with the ethical and moral challenges that capability will create?
Consider the advantages to a blind person of having a seeing-eye dog with the power of speech and the ability to read signs, newspapers and subtitles. How will the legal system treat their relationship? As owner and chattel? As equals? Or as something in between?
How will we deal with animals striking for better working conditions? For better pay? For equal rights?
Don't dismiss those questions as fantasy, because the reality they represent is right around the corner. It may arrive within my lifetime--especially if medical science continues to advance by the same leaps and bounds as computer and telecommunications technologies.
I Fought the Law
Throughout the 20th Century, the pace of technological advance has increased, not steadily, but explosively. Nowhere has that been more true than in the computer industry. The increasing sophistication of processors and a concomitant drop in the cost of computing prompted Gordon Moore of Intel to promulgate what has come to be known as Moore's Law: "Every 18 months, the speed of processors doubles and their cost is halved."
For years now, I've insisted that there is no "law" involved, and that, more properly, the quote in question ought to be called Moore's Observed Trend. On September 30, 1997, addressing the Intel Developer's Forum, no less an authority than Gordon Moore himself agreed with me.
Moore pointed out, as I have been doing for some time, that his Observed Trend will soon run into some fairly fundamental physical limits. In particular, when semiconductor traces get down to two or three atoms in width, it's going to be darned difficult to shrink them any further. Worse still, long before they get that thin, the problem of waste heat production will become extremely serious--an Intel processor running at 1 gigahertz, using .18-micron traces at 3.3 volts, would consume 40 watts of power in a physical space only four times the size of a Pentium II.
Instant slag.
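That 40-watt figure falls right out of the standard first-order model of CMOS dynamic power, P ≈ C × V² × f. Here's a quick sketch; the switched-capacitance value is an assumption I've picked purely to land near the number above, not a datasheet figure.

```python
# First-order CMOS dynamic power: P = C * V^2 * f
# C is the total capacitance switched per clock cycle. The value below is an
# assumption chosen to reproduce the ~40 W figure above, not real silicon data.

def dynamic_power(capacitance_f, volts, hertz):
    """Approximate dynamic power dissipation, in watts."""
    return capacitance_f * volts ** 2 * hertz

C = 3.7e-9   # farads switched per cycle (assumed)
V = 3.3      # core voltage, volts
F = 1.0e9    # clock frequency, 1 GHz

print(f"{V} V at 1 GHz: {dynamic_power(C, V, F):.0f} W")    # ~40 W
print(f"1.8 V at 1 GHz: {dynamic_power(C, 1.8, F):.0f} W")  # ~12 W
```

Because voltage enters as a square, dropping the core voltage buys far more relief than any change of trace metal.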
Even if semiconductor manufacturers move from aluminum through copper to gold traces, power consumption at high speeds will be their biggest bugaboo. Moore believes that the theoretical limits to his law could be reached within as little as five chip generations.
I won't be eligible for Social Security by then. Will you?
Lowering voltages and moving to metallic traces with lower electrical resistance will help. But, sooner or later, the speed of light is going to rear its ugly head.
Now, it's possible that quantum electron tunneling could provide a solution that would allow discrete chips to act as if they were integrated into a single, massive processor, thereby sidestepping both the speed of light limit and the waste heat problem--but I wouldn't count on it. So far, there doesn't seem to be any way to outfox Heisenberg's Uncertainty Principle and permit quantum tunneling to carry actual information.
Instead, I'd think about nanotechnology and massively parallel computing.
Nanotech--technology based on machines from the size of human cells down to the size of viruses--is on the brink of moving from the speculative to the proof of concept stage. In 1989, IBM scientists in San Jose, California used a scanning tunneling microscope to position 35 individual xenon atoms chilled to -270 degrees C to spell out the letters IBM. In January, 1996, their Zurich, Switzerland colleagues managed a similar feat at room temperature and, in July, 1998, they created a molecule-sized rotor whose rotation they could start and stop using the STM.
Small beginnings, perhaps, but most great technological advances arose from such humble origins. And nanotechnology holds the potential of providing enormous benefits to the human race--and equally enormous risks.
Nanotech computers could bring massive power to bear on any problem that can benefit from parallel tasking--and those problems include image and audio processing, animation, CAD, drug design, nuclear and weather simulations and cryptography. Nano surgical tools could operate simultaneously on billions of cells, correcting defects or adding capabilities. Nano medical agents could scour the circulatory system for lipid pockets, cancer cells and other undesirable residents. Nano mining tools could extract and process ore, eliminating the need for refineries. Nanoscopic "makers" could manufacture finished consumer goods, or even foodstuffs to order from inexpensive raw materials, right in people's homes.
At the same time, nano bugging devices could virtually eliminate privacy. Nano warfare could threaten the very existence of humanity--and a simple mistake in programming a von Neumann machine could result in what nanotechnologists call the "gray goo" scenario, where cancerously-reproducing nanobots reduce the entire surface of the Earth to a uniform mass of nanobots. Then there's the terrorist "red goo" scenario to worry about.
And the prospect of "advertising goo" is just too grisly to contemplate.
Legend of a Mind
But hardware isn't the only way that technological advances manifest. Software, too, will become increasingly sophisticated and feature-rich in the coming century. That will lead, in turn, to improvements in the content of everything from movies to video games. With faster and faster hardware to assist, special effects will not only become more and more difficult to distinguish from reality, they will get less and less expensive to produce.
Eventually, "news" programs will be able to create made-to-order simulations of the events upon which they report. They won't need stringers to report on disasters in Central America. Instead, they'll just create simulacra so realistic that their viewers won't be able to tell the difference.
Of course, bottom feeders in the industry will use that same technology to "recreate" actions that actually happened--if they happened at all--behind closed doors. Technology will make us all voyeurs in the windowless halls of the future. And the sincere, carefully-coiffed newsreaders who take us there may themselves be nothing more than animated electrons employing artificial intelligence routines to evoke the appropriate catch in their digitized voices, the knowing wink and the sad shake of the head that helps us all feel superior to the objects of their reportage.
But wait! That's not all!
Already, companies such as General Magic are hard at work on so-called "agent" technology—programs designed to extend the reach of the user by monitoring and searching data while the user himself is offline or busy elsewhere. Quasi-autonomy will, long before the century is out, evolve into true machine intelligence.
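To make the idea concrete, here's a minimal sketch of what such an agent might look like. Everything in it--the feed URL, the keyword list, the polling interval--is a hypothetical placeholder of my own; it stands in for General Magic's actual technology in spirit only.

```python
import time
import urllib.request

# A toy "agent": it watches a data source on the user's behalf while the
# user is offline, and queues anything that matches the user's interests.
# The URL, keywords and polling interval are hypothetical placeholders.

WATCH_URL = "http://example.com/newsfeed.txt"   # hypothetical plain-text feed
KEYWORDS = ["nanotech", "HDTV", "Mars"]         # the user's standing interests
POLL_SECONDS = 300                              # check every five minutes

def fetch(url):
    """Grab the current contents of the watched source."""
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8", errors="replace")

def interesting_lines(text, keywords):
    """Return only the lines that mention any of the user's keywords."""
    return [line for line in text.splitlines()
            if any(k.lower() in line.lower() for k in keywords)]

def run_agent(cycles=3):
    """Poll a few times and accumulate findings for the user's return."""
    findings = []
    for cycle in range(cycles):
        try:
            findings.extend(interesting_lines(fetch(WATCH_URL), KEYWORDS))
        except OSError:
            pass                      # source unreachable; try again next cycle
        if cycle < cycles - 1:
            time.sleep(POLL_SECONDS)
    return findings

if __name__ == "__main__":
    for item in run_agent(cycles=1):
        print("Flagged for you:", item)
```

Crude as it is, that's the shape of the thing: the program keeps watch so the user doesn't have to.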
And those machines will swiftly become more intelligent than even the brightest of us. Unburdened by our limitations, unsleeping, unemotional, tapped directly into the global datanet, they will be formidable competitors, if we allow them to be. How will we handle the challenge they present?
One solution may be to become more like them. In the coming decade, palmtops will be supplanted by wearable computers--smart watches, glasses and clothing. The step from there to implanted computers is a small one--and the potential rewards are significant. We, too, can be plugged directly into the net. We can have our senses and our analytical faculties enhanced by machines that are, in a very real sense, a part of us. We can merge with our computers.
Some of us will.
And all of us will be faced with the same sorts of ethical, moral and legal dilemmas that we will face when we bestow intelligence on lower animals. Do we treat intelligent computers as chattels or as persons? Do we dare give the vote to machines that are smarter than us?
Do we dare withhold it?
Man on the Moon
We're going to need all the help we can get as the reach of the datasphere extends into planetary space. As it stands, the space shuttles are wired into the Mbone and much of our global data traffic is transmitted via satellite. Soon the International Space Station will join the net.
It won't be alone for long.
With the discovery of water on the Moon, it's not going to take too much time before there are Lunar colonies--and they will be hooked into the net, too. Despite the speed-of-light lag, we'll want them to have the continuous communication and collaboration capabilities that only net residency affords.
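Just to put a number on that lag: at lunar distances it's seconds, and farther out it's minutes. Here's the arithmetic; the distances are rounded averages, and the Earth-Mars figure in particular swings enormously with orbital geometry.

```python
# Round-trip light-time over interplanetary distances.
# Distances are rounded averages; Earth-Mars varies from roughly
# 55 million to 400 million km depending on where the planets sit.

SPEED_OF_LIGHT_KM_S = 299_792

def round_trip_seconds(one_way_km):
    """Two-way signal delay, ignoring routing and processing overhead."""
    return 2 * one_way_km / SPEED_OF_LIGHT_KM_S

moon = round_trip_seconds(384_400)        # Earth-Moon mean distance
mars = round_trip_seconds(225_000_000)    # a middling Earth-Mars distance

print(f"Moon: {moon:.1f} seconds round trip")       # ~2.6 s
print(f"Mars: {mars / 60:.0f} minutes round trip")  # ~25 min
```

A couple of seconds is tolerable for mail and file transfer; at Mars distances, anything interactive gets a lot harder.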
Unless I miss my guess, before the century is out, the moon colonies will be joined by Martian and asteroidal settlements. These space-born communities will inevitably yearn for the freedom and self-determination that is granted only to nations and the result will be independence--however long that takes and at whatever cost.
But the omnipresent Internet--or whatever we name its successor--will allow the creation of countries with no physical boundaries. Like the net itself, these virtual nations will transcend geography, ethnicity and language. Their citizens will reside anywhere and everywhere that there are humans.
Perhaps even on the Moon.
We are already witnessing the Internet acting as the agent for the downfall of dictatorships. That trend will continue. Power built on lies cannot withstand the antiseptic of fact--and the net is a conduit of information more than it is any other single thing. Those who would resist its power do so at the cost of keeping their populations in ignorance and darkness. Thus cut off from the mainstream of human commerce, they cannot compete--and they must fall.
It happened in the Soviet Union. It is happening--slowly but unstoppably--in China, in Burma and in the developing nations of Africa and South America. Nothing can stop it.
But some will try.
We have yet to experience an identifiable act of digital terrorism--but that, too, will come. Somewhere, someone will purposefully crash a power grid, an air traffic control system or an emergency communications PBX. Someone, somewhere will, with malice aforethought, hack an early warning system computer to create the impression of a sneak attack, or send an airliner plunging into a mountain side.
It hasn't happened yet--at least, we don't know if it's happened yet--but it will.
Some of these acts will be those of individual sociopaths. Some will be collateral acts by political groups determined to exploit them as means to an end.
And some, perhaps, will be the warning signs of a Luddite jihad.
Because not everyone sees technology as universally beneficial and welcome. Many people fear change--and technological advances are at once harbingers and avatars of change. The myth of "the good old days" is a powerful siren song to those who are confused and dismayed by a world that refuses to stand still--that insists on relentlessly changing.
We who gladly surf the seas of change are their enemies--and they will fight us.
But we will win, because change is simply inevitable. Societies progress or they die. There is no middle ground.
So, don't be afraid of the changes our industry will visit upon us in the coming decades. Seek new alliances and new solutions. Welcome change as a bringer of new opportunities. Don't let change take you by surprise. Keep your head up and look for early signs of changes that will affect you.
Because change is surely coming.
(Copyright© 1999 by Thom Stark--all rights reserved)