When I started my blog a decade ago, I called myself a “Technology Optimist” in my first post. I wrote that
I am excited to be living in a time when we are making tremendous progress on understanding aging, fighting cancer, developing clean technologies, and so much more. This is not to say that I automatically assume that technology by itself will solve all our problems (I guess that would be a “technology pollyanna”). Instead I believe that—over time—we as a society figure out how to use technology to actually improve our standard of living. I for one am sure glad I am not living in the Middle Ages.
The fundamental tenor of this book is one of optimism. This is in part a reflection of my personality. I am pretty sure it would be impossible to be a VC as a pessimist. You would focus only on the many reasons why a particular startup won't succeed and never make an investment.
Optimism is a theme that I will return to many times in this book and so it is a good idea to make this apparent bias of mine clear upfront. It is more than a personal bias though. Optimism has a profound role in human affairs and its source is the power of knowledge. Knowledge has given us vaccines and cures to many diseases. Knowledge lets us travel long distances at high speeds in trains and planes. Knowledge lets us read Aristotle and listen to Mozart. Knowledge is what makes us humans human (in a way I will make more precise shortly).
I am optimistic about what humanity can ultimately accomplish with digital technology. Using the Internet and advances in machine intelligence we can dramatically accelerate the creation and distribution of knowledge. This will be essential for progress.
Progress has become a loaded word. Is there such a thing as true progress and what does it look like? Aren't we humans responsible not only for the many diseases of civilization but also for the outright extinction of countless species and potentially our own demise through climate change?
Yes, we do have problems. And one might, as a pessimist, focus on these problems and conclude they cannot be solved. This is like looking at a startup and concluding there is no point in even getting going—or funding it—because, well, there will be problems.
The beauty of problems, though, is that they can be overcome by human knowledge. Is that true for all problems? Well it has been true so far, as we are still here.
This is in and of itself quite remarkable: we are slower and weaker than many other species, but humans alone have developed the capacity for knowledge. And knowledge turns out to be extraordinarily powerful. It allowed us to figure out, for instance, how to make fire. We may take this for granted today, but no other species has managed to do this and to record its knowledge of fire making in a way that can be shared across space and time (I will shortly provide a more precise definition of knowledge and explain why it is so powerful).
There is an extreme position that would suggest we would have been better off never developing knowledge, that we would still live in a state of paradise had we not tasted the forbidden fruit. Not only is it hard to see how we would go back there now, but more importantly, I for one prefer not to be consumed by wild animals.
Will all future problems be solvable, including, say, climate change? There is, of course, no guarantee. We might wind up with a problem we cannot solve and that might cause our extinction. But what is certain is that assuming problems cannot be solved guarantees that they will not be solved. Pessimism is a self-defeating attitude, as it leads to inaction.
Yes, digital technologies including the Internet and advances in automation have brought with them a new set of problems. We will encounter many in this book, including immense pressure on people's ability to earn a living and the conflicts arising from being exposed to content that runs counter to one's upbringing or deeply held cultural or religious beliefs.
And yet this expanded space of the possible also includes amazing progress, such as zero-marginal-cost diagnosis of disease for anyone anywhere in the world, the example we encountered at the end of the previous chapter.
Believing in the potential for real progress, though, is not the same as being a Pollyanna. Progress does not happen by itself as a deterministic function of technology. Contrary to Kevin Kelly's claims in his book “What Technology Wants”, technology doesn't want anything by itself, and certainly not a better world for humanity. It simply makes such a world possible.
Economics also doesn't want anything. It is not normative. Nothing in economics, for instance, says that a new technology cannot make some people or possibly a great many people worse off. Economics gives us tools for analyzing markets and designing regulations to address some of their failures. But we still need to make choices about what we want markets and regulations to accomplish for humanity.
And contrary to Karl Marx, history too doesn't want anything. Nor is there, as political scientist Francis Fukuyama would have it, an end of history with a final social, economic and political system. History is the result of human choices; it doesn't make its own choices. And as long as we make technological progress, there will be new choices to make.
It is our responsibility, both individually and collectively, to make choices about which of the many worlds made possible by digital technology we want to live in. We need to choose rules for society (regulation) and behaviors for ourselves (self-regulation). And the choices we make now are especially important because the latest expansion of the space of the possible includes machines that have knowledge and can make choices.
There are many people who work in technology and investing who are optimists and believe in progress. Among those there is a subset, myself included, who also believe in the need for regulation. There is another group though that has a decidedly libertarian streak and would like for government to just get out of the way.
The history of technological progress is one of changes in social norms and political regulations. For instance, at the moment much of the world gets around by driving cars. The car was an important technological innovation in that it allowed for individual mobility. But it would have been impossible to have widespread adoption of cars without regulation. We needed to agree on rules of the road and we also needed to build roads. Neither of these could have been accomplished based solely on individual choices. Roads and their rules are examples of natural monopolies: you don't want to have multiple disjointed road networks or different sets of rules of the road (imagine some people driving on the left side and others on the right). Natural monopolies are classic examples of market failure that require regulation. The car would also not have made much sense as individual transport without changes in social norms, such as making it acceptable for women to operate a car (a change that did not take place in Saudi Arabia until the end of 2017).
Not all regulation will be good regulation. In fact, the earliest regulation of automotive vehicles was aimed at delaying their adoption by limiting their speed to that of a horse-drawn carriage and in some cases even requiring them to be preceded by someone carrying a flag.
Similarly, not all regulation of digital technology will be good regulation. Much of it will initially aim to protect the status quo and help incumbent enterprises, such as the recently enacted changes to net neutrality rules. But that is no reason to call for an absence of regulation. It should be seen, instead, as a challenge to come up with the right regulation, as we did eventually in the case of cars.
My proposals for regulation later in the book are aimed at being pro-innovation by giving more economic freedom to individuals and by giving them better access to information (informational freedom). These regulations are choices we need to make collectively. They represent a big departure from the past aimed at letting us explore the space of the possible opened up by digital technologies so that we can transition from the Industrial Age to the Knowledge Age.
There is another set of choices we need to make individually. These have to do with how we react to the massive acceleration of information dissemination and knowledge creation made possible by digital technology. These are not rules society can or should impose because they relate to our inner mental states.
For instance, there are a lot of people at the moment who feel offended by content that is available on the Internet. People yell, insult and even threaten one another in comment threads and forums. Others spend all their time in polarized online communities, where they are fed algorithmically curated information that confirms their existing biases, a phenomenon that has become known as a “filter bubble”. Even though some technology and regulation can help here, fundamentally overcoming these problems requires internal changes, which I later describe in a section called psychological freedom.
Changing ourselves requires self-regulation. By this I mean training our capacity as individuals to use our rationality. From Eastern religions including Hinduism and Buddhism, to the Stoics in ancient Greece, there is a long tradition of understanding how we can get past our immediate emotional and heuristic brain responses. Much of this lines up well with what we have uncovered more recently about the workings of the human brain.
If we want to have true progress leveraging digital technologies, we need to get past our initial emotional responses and figure out how to maintain a rational dialog. Only then will our choices on where to go in the dramatically enlarged space of the possible be based on our critical thinking abilities.
Much of what I have been saying here about optimism, the potential for progress and the need for regulation and self-regulation could immediately be attacked as coming from the perspective of a white male venture investor living in the United States. As such it might be deemed a privileged view that I am attempting to impose on others.
The next chapter will argue instead that Humanism provides an objective foundation of values for this perspective that applies to all of humanity.