Informational Freedom

Can you read any book you want to? Can you listen to all the music that has ever been recorded? Can you access any web page you wish to consult? Can you see your own medical record? Other people's medical records?

Historically, questions like these would not have made much sense, since copying and distributing information was quite expensive. In the early days of writing, for instance, when humans literally copied text by hand, copies of books were rare, costly, and subject to copying errors (unintentional or intentional). Few people in the world at that time had access to books, and even if some power had wanted to expand access, the immense cost involved would have made it difficult to do so.

In the age of digital information, when the marginal cost of making a copy and distributing it has shrunk to zero, all limitations on digital information are in a profound sense artificial. They involve adding cost back to the system in order to impose scarcity on something that is abundant. As an example, billions of dollars have been spent on trying to prevent people from copying digital music files and sharing them with their friends or the world at large.

Why are we spending money to make information less accessible? When information existed only in analog form, the cost of copying and distribution allowed us—to some degree required us—to build an economy and a society grounded on information's scarcity. A music label, for instance, had to recruit talent, produce recordings, market them, distribute them, and so on, and charging for records allowed the label to cover its costs and turn a profit. In a world where individuals can produce music and distribute it for free to the entire world, music labels in their traditional form should become obsolete. The business model of charging for recorded music and the copyright protections required to sustain it are remnants of the industrial age.

We take many artificial restrictions on information access and distribution for granted because we, and a couple of generations before us, have grown up with them. This is the only system we know, and much of our personal behavior, our public policies, and our intellectual inquiries are shaped by what we and our recent ancestors have experienced. To transition into a knowledge society, however, we should jettison much of this baggage and strive for maximum informational freedom.

Let's be clear: Information is not the same as knowledge. It is a broader concept, including, for instance, the huge amounts of log files generated every day by computers around the world, much of which may never be analyzed. We don't know in advance which information will turn out to form the basis of knowledge (that is, information meant for other humans, which humans choose to maintain over time). Hence it makes sense to keep as much information as possible and to make access to that information as broad as possible.

In this section we will explore various ways to expand informational freedom, the second important regulatory step to facilitate our transition to a knowledge society.

Access to the Internet

On occasion, the Internet has come in for derision from those who claim it is only a small innovation compared to, say, electricity or vaccinations. Yet it is not small at all. If you want to learn how electricity or vaccinations work, the Internet suddenly makes that possible for anyone, anywhere in the world. In fact, absent artificial limitations re-imposed on it, the Internet comprises the means of access to and distribution of all human knowledge—including all of history, art, music, science, and so on—to all of humanity. As such, the Internet is a crucial enabler of a knowledge society, and access to it should be regarded as a basic human need, as necessary as food, clothing, or shelter (see earlier section on Basic Needs).

At present, over 3.5 billion people are connected to the Internet, and we are adding more than 200 million additional users every year [47]. This tremendous growth has become possible because the cost of access has fallen so dramatically. A capable smartphone costs as little as $100 to manufacture, and in some cities, such as Seoul and Hong Kong, bandwidth is provided at costs as low as $0.17 per Mbps (megabits per second; you need about 1-5 Mbps to stream a movie in SD quality) [48] [49]. Put differently, the cost to connect all of humanity to the Internet, and thus eventually to all of human knowledge as our collective inheritance and ongoing project, is only $x annually. That is less than y% of the size of the global economy.

Given these numbers, we can easily see how we might cover access to the Internet with a Basic Income. Ongoing technological innovation, such as MIMO wireless technology, will further lower prices for bandwidth and the necessary end-user devices.

Even if we connect everyone to the Internet, we still must address other limitations to the free flow of information. In particular, we should all oppose restrictions on the Internet imposed by either our governments or our Internet Service Providers (ISPs, the companies we use to get access to the Internet). Both of them have been busily imposing artificial restrictions, driven by a range of economic and policy considerations.

One Global Internet

By design, the Internet does not embody a concept of geographic regions. Most fundamentally, it is a way to connect networks with one another (hence the name "Internet", a network of networks). Since the Internet works at global scale, any geographic restrictions that exist have been added in, often at great cost. For instance, Australia and the UK have recently built so-called "firewalls" around their countries that are not unlike the much better-known Chinese firewall. These firewalls are not cheap: it cost the Australian government about $44 million to build its geographically based online perimeter [50]. This is extra equipment added to the network that places information flows under government control. As citizens, we should be outraged that our own governments are spending our money to restrict our informational freedom.

No Artificial Fast and Slow Lanes

The same additional equipment used by governments to re-impose geographic boundaries on the internet is also used by ISPs to extract additional economic value from customers, in the process distorting knowledge access. These practices include paid prioritization and zero rating. To understand them better and why they are a problem, let's take a brief technical detour.

When you buy access to the Internet, you pay for a connection of a certain capacity. Let's say that is 10 Mbps (10 megabits per second). If you used that connection fully for, say, sixty seconds, you would have downloaded (or uploaded, for that matter) 600 megabits, or 75 megabytes (roughly the size of fifteen to twenty MP3 songs). The fantastic thing about digital information is that all bits are the same. So it really doesn't matter whether you used that capacity to access Wikipedia, to check out Khan Academy, or to browse images of LOLCats. Your ISP should have absolutely no say in that. You have paid for the bandwidth, and you should be free to use it to access whatever parts of human knowledge you want.
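
To make the arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. The figures are the round numbers from the example above, not measurements:

```python
# Capacity in megabits per second times seconds of use gives the total volume
# transferred, regardless of what the bits happen to encode.
capacity_mbps = 10        # the connection you paid for
seconds = 60              # one minute of full use

megabits = capacity_mbps * seconds
megabytes = megabits / 8  # 8 bits per byte

print(megabits, "megabits =", megabytes, "megabytes")  # 600 megabits = 75.0 megabytes
```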

That principle, however, doesn't maximize profit for the ISP. To do so, the ISP seeks to discriminate between different types of knowledge based on consumer demand and the supplier's ability to pay. Again, this has nothing to do with the underlying cost of delivering those bits. How do ISPs discriminate between different kinds of knowledge? They start by paying to install equipment that lets them identify bits based on their origin. They then go to a company like YouTube or Netflix and ask it to pay the ISP to have its traffic “prioritized,” while intentionally slowing down the traffic from other sources that are not paying.

The solution to this issue goes by the technical and boring name of Net Neutrality. But what is really at stake here is informational freedom. Our access to human knowledge should not be skewed by our ISPs.

ISPs can get away with restricting access to knowledge in the first place because in most geographic areas, no competitive market for Internet access exists. ISPs either have outright monopolies or they operate in small oligopolies. For instance, in the part of Chelsea where I live at the moment, there is just one broadband ISP. Over time technological advances such as wireless broadband and mesh networking may make this market more competitive. Until then, however, we need regulation to restrict the ability of ISPs to limit our informational freedom.

[Add section on zero rating here? Including opposition in developing countries including India?]

Bots for All of Us

Once you have access to the Internet, you need software to connect to its many informational sources and services. When Sir Tim Berners-Lee invented the World Wide Web in 1989 to make information sharing on the Internet easier, he did something very important [51]. He specified an open protocol, the Hypertext Transfer Protocol or HTTP, that anyone could use to make information available and to access such information. By specifying the protocol, Berners-Lee opened the way for anyone to build software, so-called web servers and browsers, that would be compatible with it. Many did, including, famously, Marc Andreessen with Netscape. Many of these web servers and browsers were available as open source and/or for free.
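
To give a sense of just how open the protocol is, here is a minimal sketch, using only Python's standard library, of a program speaking HTTP directly to a public test server (example.org). Any web server and any browser speak this same language:

```python
import http.client

# Open a connection and send the same kind of request every browser sends.
conn = http.client.HTTPConnection("example.org")
conn.request("GET", "/")
response = conn.getresponse()

print(response.status, response.reason)  # e.g., 200 OK
print(response.read()[:200])             # the first bytes of the page's HTML
conn.close()
```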

The combination of an open protocol and free software meant two things: permissionless publishing and complete user control. If you wanted to add a page to the web, you didn't have to ask anyone's permission. You could just download a web server (e.g., the open-source Apache), run it on a computer connected to the Internet, and add content in HTML. Voilà, you had a website up and running that anyone from anywhere in the world could visit with a web browser running on his or her computer (at the time there were no smartphones yet). Not surprisingly, content available on the web proliferated rapidly. Want to post a picture of your cat? Upload it to your web server. Want to write something about the latest progress on your research project? No need to convince an academic publisher of the merits. Just put up a web page.
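
That permissionless quality is easy to demonstrate. The following sketch, which assumes nothing beyond Python's standard library and a machine reachable on the Internet, writes a page and serves it to anyone who asks; no publisher or platform has to approve it:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Write a tiny page to the current directory.
with open("index.html", "w") as f:
    f.write("<html><body><h1>My cat</h1><p>Here is what I learned today.</p></body></html>")

# Serve the directory to anyone who can reach this machine on port 8000.
HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()
```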

People accessing the web benefited from their ability to completely control their own web browser. In fact, in the Hypertext Transfer Protocol, the web browser is referred to as a “user agent” that accesses the Web on behalf of the user. Want to see the raw HTML as delivered by the server? Use “view source.” Want to see only text? Instruct your user agent to turn off all images. Want to fill out a web form but keep a copy of what you are submitting for yourself? Create a script to have your browser save all form submissions locally as well.
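
Because the protocol is open, nothing stops you from writing your own user agent. Here is a minimal sketch, again using only Python's standard library, of an agent that fetches a page, keeps a local copy for your own records, and shows you just the text, with no images and no ads:

```python
import html.parser
import time
import urllib.request

class TextOnly(html.parser.HTMLParser):
    """Collect only the text content of a page, ignoring markup and images."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

url = "http://example.org/"
raw = urllib.request.urlopen(url).read().decode("utf-8", "replace")

# Keep your own archive of everything your agent retrieved.
with open(f"archive-{int(time.time())}.html", "w") as f:
    f.write(raw)

parser = TextOnly()
parser.feed(raw)
print(" ".join(" ".join(parser.chunks).split()))  # the page as plain text
```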

Over time, popular platforms on the web have interfered with some of the freedom and autonomy that early users of the web used to enjoy. I went on Facebook the other day to find a witty post I had written some time ago on a friend's wall. It turns out that Facebook makes finding your own wall posts quite difficult. You can't actually search all the wall posts you have written in one go; rather, you have to go friend by friend and scan manually backwards in time. Facebook has all the data, but for whatever reason, they've decided not to make it easily searchable. I'm not suggesting any misconduct on Facebook's part; that's just how they've set it up. The point, though, is that you experience Facebook the way Facebook wants you to experience it. You cannot really program Facebook differently for yourself. If you don't like how Facebook's algorithms prioritize your friends' posts in your newsfeed, then tough luck, there is nothing you can do.

Or is there? Imagine what would happen if everything you did on Facebook was mediated by a software program — a “bot” — that you controlled. You could instruct this bot to go through and automate for you the cumbersome steps that Facebook lays out for finding past wall posts. Even better, if you had been using this bot all along, the bot could have kept your own archive of wall posts in your own data store (e.g., a Dropbox folder); then you could simply instruct the bot to search your own archive. Now imagine we all used bots to interact with Facebook. If we didn't like how our newsfeed was prioritized, we could simply ask our friends to instruct their bots to send us status updates directly, so that we could form our own feeds. When Facebook lived on the web, this was entirely possible because of the open protocol; it is no longer possible in a world of proprietary, closed apps on mobile phones.
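
To make this less abstract, here is a hypothetical sketch of the archive-search piece, assuming the bot had been saving each wall post as a small JSON file in a folder you control (the folder layout and field names are invented for illustration):

```python
import json
import pathlib

ARCHIVE = pathlib.Path("~/Dropbox/wall-posts").expanduser()  # assumed location

def search_posts(keyword):
    """Return every archived post whose text mentions the keyword."""
    hits = []
    for path in ARCHIVE.glob("*.json"):
        post = json.loads(path.read_text())
        if keyword.lower() in post.get("text", "").lower():
            hits.append(post)
    return hits

for post in search_posts("witty"):
    print(post["date"], post["friend"], post["text"][:80])
```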

Although this Facebook example might sound trivial, bots have profound implications for power in a networked world. Consider on-demand car services provided by companies such as Uber and Lyft. If you are a driver today for these services, you know that each of these services provides a separate app for you to use. The closed nature of these apps makes it very hard for you to participate in more than one network at a time. What would happen, though, if you had access to bots that could interact on your behalf with these networks? That would allow you to simultaneously participate in all of these marketplaces, and to play one off against the other.

Using a bot, you could set your own criteria for which rides you want to accept. Those criteria could include whether the commission charged by a given network is below a certain threshold. The bot, then, would allow you to accept the rides that maximize the net fare you receive. Ride-sharing companies would no longer be able to charge excessive commissions, since new networks could easily arise to undercut them. For instance, a network could arise that is cooperatively owned by drivers and that charges just enough commission to cover its costs. The mere possibility that such a network could exist would substantially reduce the power of the existing networks.
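
Here is a hypothetical sketch of the kind of rule such a bot could apply. The networks, fares, and commission rates are invented for illustration; the point is simply that the driver's own code, not any single network's app, decides which ride to accept:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    network: str
    fare: float        # what the rider pays
    commission: float  # the fraction the network keeps

MAX_COMMISSION = 0.10  # the driver's own rule: ignore networks taking more than 10%

def best_offer(offers):
    """Pick the acceptable ride that leaves the driver with the highest net fare."""
    acceptable = [o for o in offers if o.commission <= MAX_COMMISSION]
    return max(acceptable, key=lambda o: o.fare * (1 - o.commission), default=None)

offers = [
    Offer("BigRideCo", fare=20.0, commission=0.25),
    Offer("DriverCoop", fare=18.0, commission=0.05),
]
print(best_offer(offers))  # the co-op ride nets $17.10; the other would net only $15.00
```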

We could also use bots as an alternative to anti-trust regulation to counter the overwhelming power of technology giants like Google or Facebook without foregoing the benefits of their large networks. These companies derive much of their revenue from advertising, and on mobile devices, consumers currently have no way of blocking the ads. But what if they did? What if users could modify their mobile apps to add ad-blocking functionality, just as they can with web browsers?

Many people decry ad-blocking as an attack on journalism that dooms the independent web, but that's an overly pessimistic view. In the early days, the web was full of ad-free content published by individuals. In fact, individuals first populated the web with content long before institutions joined in. When they did, they brought with them their offline business models, including paid subscriptions and of course advertising. Along with the emergence of platforms such as Facebook and Twitter, this resulted in a centralization of the web. More and more content was produced either on a platform or by traditional publishers.

Ad-blocking is an assertion of power by the end user, and that is a good thing in all respects. Just as a judge recently found that taxi companies have no special right to see their business model protected, neither do ad-supported publishers. And while in the short term this might prompt publishers to flee to apps, in the long run it will mean more growth for content that is crowdfunded and/or micropaid, freely shareable, and published using open formats. Rather than being the end of the open web, ad-blocking is really the beginning of its renaissance!

To curtail the centralizing power of network effects more generally, we should shift power to end users by allowing them to have user agents for mobile apps, too. The reason users don't wield the same power on mobile is that native apps relegate end users once again to interacting with services using only their eyes, ears, brains, and fingers. No code can execute on their behalf, while the centralized providers use hundreds of thousands of servers and millions of lines of code. Like a web browser, a mobile user agent could do things such as strip ads, keep copies of your responses to services, and let you participate simultaneously in multiple services (and bridge those services for you). The way to help end users is not to have government smash big tech companies, but rather for government to empower individuals to have code that executes on their behalf.

What would it take to make bots a reality? We might require companies like Uber, Google, and Facebook to expose all of their functionality not just through standard human-usable interfaces such as apps and websites, but also through so-called Application Programming Interfaces (APIs). An API is for a bot what an app is for a human. The bot can use it to carry out operations, such as posting a status update on a user's behalf. In fact, companies such as Facebook and Twitter already have APIs, but these tend to have limited capabilities. Also, companies presently have the right to control access, so that they can shut down bots even when a user has clearly authorized a bot to act on his or her behalf.
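
To illustrate what that means in practice, here is a hypothetical sketch of a bot posting a status update through an API. The endpoint, token, and fields are invented; real platform APIs differ in their details and, as noted above, in what they allow:

```python
import json
import urllib.request

API_URL = "https://api.example-social.com/v1/status"  # hypothetical endpoint
TOKEN = "token-the-user-authorized"                    # stands in for a real credential

def post_status(text):
    """Post a status update on the user's behalf, just as the app would."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"text": text}).encode(),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# post_status("Hello from my own user agent!")  # not run here: the endpoint is fictional
```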

Bots that we all can deploy to gain more power online are technically feasible; it comes down to regulation. Instead of requiring companies to provide an API that any bot I have authorized can access, we could also make it legal to reverse engineer how apps communicate. Currently, such reverse engineering is effectively prohibited by so-called anti-circumvention laws, including a key provision in the Digital Millennium Copyright Act (DMCA). These laws allow companies to restrict access to the private encryption keys inside an app, which users would require in order to reverse engineer it. The legal framework today works primarily to protect companies and their servers from bots instead of allowing end users to be empowered by them.

Now, don't companies need to protect their encryption keys? Aren't botnets the culprits behind all those DDoS (distributed denial of service) attacks? Yes, there are a lot of compromised machines in the world, including set-top boxes and home routers that some people are using for nefarious purposes. Yet that only demonstrates how ineffective the existing laws are at stopping illegal bots. Because those laws don't work, companies have already developed the technological infrastructure to deal with the traffic from bots.

How would we prevent people from adopting bots that turn out to be malicious code? Open source seems like the best answer here. Many people could inspect a piece of code to make sure it does what it claims. But that's not the only answer. Once people can legally be represented by bots, many markets currently dominated by large companies will face competition from smaller startups.

Legalizing representation by a bot would eat into the revenues of large companies, and we might worry that they would respond by slowing their investment in infrastructure. I highly doubt this would happen. Uber, for instance, was recently valued at $50 billion. The company's “take rate” (the percentage of the total amount paid for rides that it keeps) is 20 percent. If competition forced that rate down to 5 percent, Uber's value would fall to roughly $12.5 billion as a first approximation. That is still a huge number, leaving Uber with ample room to grow. As even this bit of cursory math suggests, capital would still be available for investment, and those investments would still be made.

That's not to say that no limitations should exist on bots. A bot representing me should have access to any functionality that I can access through a company's website or apps. It shouldn't be able to do something that I can't do, such as pretend to be another user or gain access to private posts by others. Companies can use technology to enforce such access limits for bots; there is no need to rely on regulation.

Even if I have convinced you of the merits of bots, you might still wonder how we might ever get there from here. The answer is that we can start very small. We could run an experiment with the right to be represented by a bot in a city like New York. New York's municipal authorities control how on-demand transportation services operate. The city could say, “If you want to operate here, you have to let drivers interact with your service programmatically.” And I'm pretty sure, given how big a market New York City is, these services would agree.

Limiting the Limits to Sharing and Creating

Once we have fought off geographic and prioritization limits and have bots in place so that all users can meaningfully control their own interactions with the global knowledge network, we still come up against limits on which information you can share and on what you can create, based on how you obtained that information. We'll first look at copyright and patent laws and suggest policies for reducing how much they limit the knowledge loop. Then we'll turn to confidentiality and privacy laws.

Earlier I remarked how expensive it was to make a copy of a book when human beings literally had to copy it one letter at a time. Eventually we invented movable type and the printing press, which together provided for much faster and cheaper reproduction of information. Even back then, governments and the church saw this as a threat to their authority; in England, the Licensing of the Press Act of 1662 predated attempts to censor the web by more than 300 years [52]. Their solution was simple: license printers. If you wanted to operate a press, and if you wanted the right to make copies, you needed the government's approval. You received it in exchange for agreeing to censor content that was critical of the government or that ran counter to church teachings. And that is the origin of copyright.

Over time, as economies grew and publishing companies emerged as business enterprises, copyright became commercially meaningful, less as an instrument of government control and more as a source of profit. The logic runs like this: “If I hold the copyright to a specific work, then you cannot make copies of it, which means that I essentially have a monopoly in providing this content. I am the only one allowed to produce and sell copies of it.”

Legitimating this shift was the idea that in order to get content produced in the first place, incentives needed to exist for the creators of content, just as incentives needed to exist for people to create tangible or material goods. If you own your factory, then you will invest in it because you get to keep the benefits of those investments. Similarly, the thinking goes, if you are working on a book, you should own the book so that you have an incentive to write it in the first place and to improve it over time through revisions.

Over time the holders of copyrights have worked to strengthen their claims and extend their reach. For instance, with the passage of the Copyright Act of 1976, the requirement to register a copyright was removed; instead, if you created content, you automatically held the copyright in it [53]. Then in 1998, with the passage of the Copyright Term Extension Act, the term of copyright was extended from 50 to 70 years beyond the life of the author. This became known as the Mickey Mouse Protection Act, because Disney, having built a very large and profitable enterprise and mindful that a number of its copyrights were slated to expire, had lobbied the hardest for it [54]. More recently, copyright lobbying has attempted to interfere with the publication of content on the Internet through legislation such as PIPA and SOPA. [Expand, including TPP] In these latest expansion attempts, the conflict between copyright and the digital knowledge loop becomes especially clear. Copyright severely limits what you can do with content, essentially down to consuming it. It dramatically curtails your ability to share it and to create other works that use some or all of it. Some of the more extreme examples include takedowns of YouTube videos that used the Happy Birthday song, which, yes, was under copyright at the time (a court invalidated the claim in 2015). [More examples]

From an economic standpoint, it is never socially optimal to prevent someone from listening to a song or watching a baseball game. Since the marginal cost of serving one more person is zero, the world is better off if that person gets even a little bit of enjoyment from the content. And if that person turns out to be inspired and writes an amazing poem that millions read, then the world is a lot better off.

Now, you might say, it's all well and good that the marginal cost for making a copy is zero, but what about all the fixed and variable cost that goes into making content? If all content were to be free, then where would the money come from for producing any of it? Don't we need copyright to give people the incentive to produce content in the first place?

Some degree of copyright is probably needed, especially for large-scale projects such as movies. Society may have an interest in seeing $100 million blockbuster films being made, and it may be that nobody will make them if, in the absence of copyright protection, they aren't economically viable. Yet here the protections should be fairly limited (for instance, you shouldn't be able to take down an entire website just because it happens to be streaming your movie). More generally, I believe copyright can be dramatically reduced in its scope and made much more costly to obtain and maintain. The only automatic right accruing to content should be one of attribution. The reservation of additional rights should require a registration fee, because you are asking for content to be removed from the digital knowledge loop.

Let's take music as an example. Musical instruments were made as far back as 30,000 years ago, pre-dating any kind of copyright by many millennia. Even the earliest known musical notation, which marks music's transition from information to knowledge (again, defined as something that humans can maintain and pass on over time and distance), is around 3,400 years old [55]. Clearly people made music, composed it, and shared it long before copyright existed. In fact, the period during which someone could make a significant amount of money making and then selling recorded music is extraordinarily short, starting with the invention of the phonograph in the 1870s and reaching its heyday in 1999, the year that saw the biggest profits in the music industry [56].

During the thousands of years before this short period, musicians made a living either from live performances or through patronage. If copyrighted music ceased to exist tomorrow, people would still compose, perform, and record music, and musicians would make money from live performances and patronage, just as they did prior to the rise of copyright. Indeed, as Steven Johnson found when he recently examined this issue [quote NY Magazine piece], that's already happening to some degree. Many musicians have voluntarily chosen to give away digital versions of their music. They release tracks for free on SoundCloud or YouTube and raise money to make music from performing live and/or through crowdfunding methods such as Kickstarter and Patreon.

Now imagine a situation where the only automatic right accruing to an intellectual work was one of attribution: anyone wanting to copy or distribute your song, in whole or in part, has to credit you. Such attribution can happen digitally at essentially no cost, and it does not inhibit any part of the knowledge loop. Attribution imposes no restrictions on learning (making, accessing, and distributing copies), on creating derivative works, or on sharing those. Attribution can include reference to who wrote the lyrics, who composed the music, who played which instrument, and so on. It can also include where you found this particular piece of music (i.e., giving credit to the people who discover music or curate playlists).
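
As a sketch of how lightweight this could be, here is one hypothetical way to represent attribution as data that travels with a track (the field names are invented for illustration):

```python
import json

attribution = {
    "title": "Example Song",
    "lyrics_by": ["A. Writer"],
    "composed_by": ["B. Composer"],
    "performers": {"guitar": "C. Player", "vocals": "D. Singer"},
    "found_via": "E. Curator's playlist",  # credit for discovery and curation
    "derived_from": [],                    # filled in when a remix reuses this work
}

# Embed this in the file's metadata or publish it alongside the audio.
print(json.dumps(attribution, indent=2))
```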

Now, what if you're Taylor Swift and you don't want others to be able to use your music without paying you? Well, then you are asking for your music to be removed from the knowledge loop, thus removing all the benefits that loop confers upon society. So you should pay for that right, which not only represents a loss to society but will also be costly to enforce. I don't know how big the registration fee should be — that's something that will require further work — but it should be a monthly or annual fee, and when you stop paying it, your work should revert to attribution-only rights.

Importantly, in order to reserve rights, you should have to register your music with a registry, and some part of the copyright fee would go towards maintaining these registries. Thanks to blockchain technology, competing registries can exist that all share the same global database. The registries themselves would be free for anyone to search, and registration would involve a prior search to ensure that you are not trying to register someone else's work. The search could and should be built in such a way that anyone operating a music-sharing service, such as Spotify or SoundCloud, can trivially implement compliance, making sure they are not freely sharing music that has reserved rights.
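
Here is a hypothetical sketch of what such a compliance check might look like from a sharing service's point of view. The registry interface and the fingerprinting are stand-ins; a real system would use a shared (possibly blockchain-based) database and robust audio fingerprints:

```python
import hashlib

# Toy registry: fingerprint -> record of reserved rights.
REGISTRY = {
    # "3f5a...": {"owner": "Example Artist", "rights": "all-reserved"},
}

def fingerprint(audio_bytes: bytes) -> str:
    """Stand-in for a real audio fingerprint."""
    return hashlib.sha256(audio_bytes).hexdigest()

def may_share_freely(audio_bytes: bytes) -> bool:
    """True if no rights beyond attribution are registered for this track."""
    return fingerprint(audio_bytes) not in REGISTRY

track = b"...audio data..."
print(may_share_freely(track))  # True here, since nothing is registered for this track
```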

It would even be possible to make the registration fee dependent on how many rights you want to retain. All of this could be modeled after the wildly successful Creative Commons licenses. For instance, your fee might decrease if you allow non-commercial use of your music and also allow others to create derivative works. The fee might increase significantly if you want all your rights reserved.

Critics might object that the registration I'm proposing imposes a financial burden on creators. It is important to remember the other side of the ledger: removing content from the knowledge loop imposes a cost on society, and enforcing that removal, for instance by finding people who are infringing and imposing penalties on them, imposes further costs. For these reasons, asking creators to pay is fair, especially if creators' economic freedom is already assured by a Universal Basic Income. We have generated so much economic prosperity that nobody needs to be a starving artist anymore!

Universal Basic Income also helps us dismantle another argument frequently wielded in support of excessive copyright: employment at publishers. The major music labels combined currently employ roughly xxxxxx people. When people propose limiting the extent of copyright, others point to the potential loss of these jobs. Never mind that the existence of this employment to some degree reflects the cost to society of having copyright; the owners, managers, and employees of music labels are, after all, not the creators of the music.

Before turning to patents, let me point out one more reason why a return to a system of paid registration of rights makes sense. None of us creates intellectual works in a vacuum. Any author who writes a book has read lots of writing by other people. Any musician has listened to tons of music. Any filmmaker has watched lots of movies. Much of what makes art so enjoyable these days is the vast body of prior art that it draws upon and can explicitly or implicitly reference. There is no “great man” or woman who creates in a vacuum and from scratch. We are all part of the knowledge loop that has already existed for millennia.

While copyright limits our ability to share information (and thus knowledge), patents limit our ability to use information (knowledge) to create something. Much like having a copyright confers a monopoly on the reproduction of information, a patent confers a monopoly to make use of information. And the rationale for the existence of patents is similar to copyright. The monopoly that is granted results in economic rents (i.e., profits) that are supposed to provide an incentive for people to invest in research and product development.

As with copyright, the incentive argument here is suspect. People invented long before patents existed, and people have since chosen to invent without seeking patents. We can trace early uses of patents to Venice in the mid-1400s; Britain had a fairly well-established system by the 1600s. That leaves thousands of years of invention, a time that saw such critical breakthroughs as the alphabet, movable type, the wheel, and gears [other examples?]. This is to say nothing of those inventors who more recently chose not to patent their inventions because they saw how doing so would interrupt the knowledge loop and impose a loss on society. These inventors include Jonas Salk, who created the polio vaccine (other unpatented inventions include X-rays, penicillin, ether as an anaesthetic, and many more; see [57]).

With a Universal Basic Income in place, more people will be able to spend their time inventing without the incentive provided by patent protection. Digital technologies will help by reducing the cost of inventing. One example of this is the USV portfolio company Science Exchange, which has created a marketplace for laboratory experiments. Let's say you have an idea that requires you to sequence a bunch of genes. The fastest gene-sequencing machines to date are made by Illumina and cost $x million to buy. Via Science Exchange, you can access such a machine on a per-use basis for just over $1,000 [58]. Furthermore, the next generation of sequencing machines is already on the way, and these machines will further reduce the cost. Here too we see the phenomenon of technological deflation at work.

A lot of recent legislation has needlessly inflated the cost of innovation. In particular, rules around drug testing have made drug discovery prohibitively expensive. We have gone too far in the direction of protecting patients during the research process and of allowing for large medical damage claims. As a result, many drugs are either not developed at all or are withdrawn from the market despite their efficacy (for example, the vaccine against Lyme disease).

Patents (i.e., granting a temporary monopoly) are not the only way to provide incentives for innovation. Another historically successful strategy has been the offering of public prizes. Britain famously offered the Longitude rewards starting in 1714 to induce solutions to the problem of determining a ship's longitude at sea (latitude can be determined easily from the position of the sun). Several people were awarded prizes for their designs of chronometers, lunar distance tables, and other methods for determining longitude (including improvements to existing methods). As a quid pro quo for receiving the prize money, inventors generally had to make their innovations available for others to use as well.

At a time when we wish to accelerate the digital knowledge loop, we must shift the balance towards knowledge that can be used freely and that is not encumbered by patents.

[Write about recent prize examples such as X Prizes, DARPA Grand Challenges, NIST competitions; crowdfunding for prizes; need for prizes in medicine to wrestle drug creation into the open]

Going forward, we can achieve this in part by using prizes more frequently. Yet that still leaves a lot of existing patents in place. Here I believe a lot can be done to reform the existing system and make it more functional, in particular by reducing the impact of so-called Non-Practicing Entities (NPEs, commonly referred to as “patent trolls”): companies that have no operating business of their own and exist solely to litigate patents.

In recent years, many NPEs have been litigating patents of dubious validity. They tend to sue not just a company but also that company's customers, which forces many companies into a quick settlement. The NPE then turns around and uses the early settlement money to finance further lawsuits. A few dollars go a long way for them, because their attorneys do much of the legal work on a contingency basis, expecting further settlements.

As a central step in patent reform, we must therefore make it easier and faster to invalidate existing patents while at the same time making it more difficult to obtain new ones. Thankfully, we have seen some progress on both counts in the US, but we still have a long way to go. Large parts of what is currently patentable should be excluded from patentability in the first place, including designs and utility patents. University research that has received even small amounts of public funding should not be eligible for patents at all. Universities have frequently delayed the publication of research in areas where they hoped for patents they could subsequently license out. This practice has been one of the worst consequences of the patent system for the knowledge loop.

We have also gone astray by starting to celebrate patents as a measure of technological progress and prowess instead of treating them as a necessary evil (and maybe not even necessary). Ideally, we would succeed in rolling back the reach of existing patents and raising the bar for new patents while also inducing as much unencumbered innovation as possible through the bestowing of prizes and social recognition.

Getting Over Privacy and Confidentiality

Copyrights and patents aren't the only legal limitations impacting the digital knowledge loop. Privacy and confidentiality laws also loom large. I believe that someday all information should be public, including everyone's financial and health records. That may strike many readers as completely crazy, but countries like Sweden and Finland are already publishing everyone's tax return [Source?], and some individuals have also published their entire medical history on the Internet [Example?].

I come to my radical perspective here by comparing the costs and benefits to individuals and to humanity of keeping information private or confidential with the costs and benefits of making it public. In ways analogous to copyright, digital technology is dramatically shifting this cost/benefit tradeoff in favor of public information. Let's take a radiology image as an example. Analog x-ray technology produced images on a piece of film that had to be developed and could then be examined by holding it up against a backlight. If you wanted to protect the information on it, you would put it in a file and lock that file in a drawer. If you wanted a second opinion, you would have to get the file out of the drawer and have it sent to you or to another doctor by mail. That process was costly, time-consuming, and error-prone (the film could be lost in the mail, the wrong film could be sent, and so on). The upside of analog x-rays was the ease of keeping the information secret; the downside was the difficulty of putting the information to use for your benefit.

Compare analog x-rays to digital x-ray images. You can instantly walk out of your doctor's office with a copy of the digital image on a thumb drive, have it emailed to you, put it in a Dropbox folder, or share it in some other way made possible by the Internet. Thanks to this technology, you can now get a second opinion nearly instantly. And not just one: you could get two or three. If everyone you contacted directly is stumped, you could post the image on the Internet for everyone to see. Some doctor somewhere in the world may go, “Ah, I have seen that before,” even if “that” is incredibly rare. [Use Figure 1 example]

This power comes at a price: Protecting your digital x-ray image from others who might wish to see it is virtually impossible. Every doctor who looks at your image could make a copy (for free, instantly and with perfect fidelity) and then send that to someone else of his or her choosing. The same goes for others who might have access to the image, such as your insurance company.

Now, critics will make all sorts of claims about how we can prevent unauthorized use of your image using encryption. But as we will see, those claims are hollow at best and dangerous if pursued to their ultimate conclusion (preview: you cannot have general purpose computing). So in summary: The upside of a digital x-ray image is how easy it makes it to get help; the downside is how hard it is to protect digital information.

But the analysis hardly ends there. The benefits of your digital x-ray image go well beyond just you. Imagine a huge collection of digital x-ray images, all labeled with diagnoses. We can use computers to search through those images and get machines to “learn” what to look for. We know that such systems can be built [find study]. And these systems, because of the magic of zero marginal cost, can assist with and eventually provide future diagnoses for free. This, you may recall from the section on technological deflation in healthcare, is exactly what we want. It was impossible in the world of analog x-ray images, and it will continue to be impossible if each of us selfishly tries to lock up our digital x-ray images.
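
As a minimal sketch of the idea, and emphatically not a real diagnostic system, here is how a standard off-the-shelf classifier learns from labeled examples. The "images" below are random placeholder arrays; a real effort would use actual de-identified images and far more careful methods:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
images = rng.random((1000, 64 * 64))    # stand-in for flattened x-ray images
labels = rng.integers(0, 2, size=1000)  # stand-in for "finding" / "no finding" labels

X_train, X_test, y_train, y_test = train_test_split(images, labels, test_size=0.2)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Once trained, each additional diagnosis costs essentially nothing to produce.
print("accuracy on held-out images:", model.score(X_test, y_test))
```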

If we made all healthcare information public, we would dramatically accelerate innovation in diagnosing and treating diseases. At present, only large pharma companies can develop drugs, since only they have the money required to get many patients to participate in research. Many researchers are forced to join a big pharma company, leaving the results of their work protected by patents (part of the Trans-Pacific Partnership negotiations has been about pharma companies' ability to keep such information strictly to themselves). This situation recalls the music examples discussed earlier. The problem of trying to keep individual digital x-ray images private is the same as trying to apply DRM to digital music files so that only the people who paid for them can play them. It is a technological impossibility (unless you want to ban all general purpose computing), and it deprives humanity of the benefits of sharing.

So why do I keep asserting the technological impossibility of assuring privacy or confidentiality? Don't we have encryption? A number of problems exist that encryption doesn't and can't solve. The first is that encryption keys are themselves just digital information, so keeping them under wraps confronts us with another instance of the original problem. Transmitting your keys leaves them vulnerable to interception. Even generating a key on your own machine offers limited protection, unless you are willing to have that be the only key, with the risk that any data you're protecting will be lost forever if you lose the device. As a result, most systems include some kind of cloud-based backup and a way of retrieving a key, making it possible for someone to compromise your data either through technical interception or through social engineering (i.e., tricking a human being into unwittingly participating in a security breach).

In most cases when supposedly private information is compromised, the thieves have tricked people into inadvertently revealing a password or installing malware on their computers, for instance by downloading a piece of software that seems to do something useful. The computer of the doctor to whom you are sending your x-ray for a second opinion may be running software that sends every file on the machine to a third party, or that lets the third party see what is displayed on the screen. In order to view your file, the doctor of course has to decrypt and display it, so this software will have access to the image.

Avoiding such a scenario would require us to lock down all computing devices, which means preventing end users from installing software on them and running all software through a rigorous centralized inspection process. Not only would locked-down computing devices constrict innovation; they would also pose a huge threat to democracy and the knowledge loop. Someone else would control what you can compute, whom you can exchange information with, and so on, in what would essentially become a dictatorial system. The Internet's entire premise as a global knowledge network hinges on individual subnetworks and nodes controlling their own computation.

If we can't really protect data, or if doing so means sacrificing the basic purpose of computing and networking, then what should we do? The answer, I think, is to embrace a post-privacy and post-confidentiality world. We should work to protect people, not information, allowing information to become public but sheltering individuals from the potential consequences.

Economic freedom via a Universal Basic Income represents an important first step toward protecting people. If you were to lose your job over an information disclosure (maybe you had an affair and your employer thinks that's immoral), then at least you would still be able to secure your basic needs. Of course, a world of economic and psychological freedom would also decrease your chances of getting fired in the first place: when many more employees have walk-away options, retention becomes much more important.

But, you might ask, what about your bank account? If that information were public, wouldn't bad actors simply take your money? They might, which is why we need to construct systems that do not authorize payments based solely on a number you have already shared with others. Apple Pay and Android Pay are such systems: every transaction requires an additional form of authentication at the time of the transaction. Two-factor authentication will become much more common in the future for any action you take in the digital world. In addition, we will rely more and more on systems such as Sift Science, another USV portfolio company, which assesses in real time the likelihood that a particular transaction is fraudulent, taking into account hundreds of different factors.
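
A minimal sketch of the underlying idea, simplified from standard time-based one-time-code schemes (and not a description of how any particular payment system works internally): knowing a public account number is not enough, because every payment also needs a fresh code derived from a secret that only the account holder's device holds.

```python
import hashlib
import hmac
import time

DEVICE_SECRET = b"known-only-to-your-phone"  # never shared the way an account number is

def transaction_code(secret: bytes, window: int = 30) -> str:
    """Derive a short-lived six-digit code from the secret and the current time."""
    counter = int(time.time() // window).to_bytes(8, "big")
    digest = hmac.new(secret, counter, hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

def authorize(account_number: str, submitted_code: str) -> bool:
    # The (potentially public) account number alone is useless; the code must match.
    return hmac.compare_digest(submitted_code, transaction_code(DEVICE_SECRET))

code = transaction_code(DEVICE_SECRET)
print(authorize("ACCT-123456", code))      # True
print(authorize("ACCT-123456", "000000"))  # almost certainly False
```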

Another area where people are especially nervous about privacy is health information. We worry, for instance, about employers, insurers, or others in society discriminating against us because they've learned that we have a certain disease or condition. But here again, the economic freedom conferred by a Universal Basic Income would protect you from going destitute because of discrimination, and by tightening the labor market, it would also make it harder for employers to decide to systematically refuse to hire certain groups of people. Further, we could enact laws that require sufficient transparency on the part of organizations, so that we could better track how decisions have been made and detect more easily if it appears that discrimination is taking place. This combination of laws and freedoms would afford powerful protection while allowing the free flow of information that is currently “private.”

Many people contend that there must be some way to preserve privacy. I challenge anyone to create a coherent vision of the future in which individuals, not governments or large corporations (such as Apple), control technology and in which privacy or confidentiality remains secure. It just can't happen. Any time you leave your house, you are probably being filmed by someone's camera. Every smartphone has a camera these days, and in the future we'll see tiny cameras on tiny drones. Your gait identifies you almost as uniquely as your fingerprint. Your face is probably somewhere on the Internet, and your car's license plate is readable by any camera. You leave your DNA almost everywhere you go, and soon individuals will be able to sequence DNA at home for about $100. Should the government control all of these technologies? Should it levy draconian punishments for using these technologies to analyze someone else's presence or movements? And if so, how would those penalties be enforced?

The only view of the future that allows for freedom is one in which individuals retain control over technology, including general purpose computing. For technical reasons, such a world cannot accommodate our current notions of privacy and confidentiality. Yet we can adjust for that, and we have every incentive to do so. As I have pointed out, once we are willing to embrace such a world, once we feel comfortable releasing much of our data, we will reap huge benefits from that collectively. We will cure diseases. We will help end poverty. We will help fix the environment. All by enabling the knowledge loop to work much more efficiently and freely than it does today.

We should also remember that privacy is really a modern construct; by no means is it a precondition for a healthy, well-functioning society or for healthy, well-functioning individuals [Cite/add examples from Jeff Jarvis book here]. For thousands of years prior to the 18th century, most people had no concept of privacy. Many of the functions of everyday life, including excretion and reproduction, took place much more openly than they do today. Even today in rural areas, many people live perfectly well with much less privacy than is common in urban, industrial areas. You could regard the lack of privacy as oppressive, or you could see a close-knit community as a real benefit and source of strength. For instance, I remember growing up that if a member of our community was sick and couldn't leave the house, a neighbor would check in on him or her and offer to do the shopping or provide food.

If you want ample indication of how little privacy is entrenched in human nature, just look at what is happening today on the Internet. Millions of people are making amateur pornography videos of themselves and sharing them with the world. Hundreds of millions more are publishing their most intimate thoughts and reporting their most mundane activities via social media. Cultural critics have decried such public displays as narcissistic, seeing them as a breakdown in civility. That's not the case. The Internet has opened up new avenues for individuals to live in harmony with their deepest drives and instincts, which include the desire to be social and to be recognized as an individual. These drives and instincts compel us to open up and communicate with others, and not only in private settings.

Observers such as 4chan founder Chris Poole have worried that in the absence of privacy, individuals wouldn't be able to engage as fully and as freely online as they do today. Privacy, they think, helps people feel comfortable taking on multiple identities online that may depart dramatically from one another and from their “real life” selves. To me, this is a misguided argument: emotional and psychological health derives not from a splintering or fragmentation of the soul, but from the integration of different selves into a unitary yet multi-dimensional personality.

Put differently, by keeping our various online selves separate, we allow a lot of inner conflict to persist, and we pay a price for this in the form of anxieties, neuroses, and other psychological ailments. It is far better to be fully transparent about the many sides of our personality than to cloister ourselves behind veils of privacy. [Look for psychological research backing this point] [Also provide examples from Stoic philosophers/ancient Greece. You don't need privacy for psychological freedom.]

If we can accept that privacy has become obsolete and even harmful in a knowledge society, the question remains: how will we get to a post-privacy world? One way will be inadvertently, through hacks and data breaches that abruptly expose data on millions of people. Another, and better, way will be through individuals opting to disclose more of their information. For instance, hundreds of people have already posted their genomes online, and I am planning to do the same soon; I already have the files.

Many who argue against this post-privacy view point out that oppressive governments can use information against citizens, citing examples such as the Nazis persecuting homosexuals or the Chinese government persecuting dissidents. Without a doubt, preserving democracy and the rule of law is essential if we want to achieve a high degree of informational freedom.

At present, even here in the United States, many people feel they cannot trust the government. This erosion of trust has taken place over years, in part as a result of the impact of lobbying and capital on politics (see the earlier chapter on the self-conservation of capitalism). Large-scale secret surveillance, as revealed by Edward Snowden, has further eroded trust. But if the net result winds up being a society that pits us (the citizens) against them (the government) in a crypto battle, then we will all lose. We will lose general purpose computing, and we will eventually find ourselves in exactly the kind of dictatorship we are seeking to avoid. More on this in the chapter on Democracy later.
