What I said in chapter 6 holds true regardless of your opinions on copyright-type issues. This chapter will try to build a new, more sophisticated ethical framework for thinking about those issues.
It is worth taking a moment to justify this.
There are some people who believe that copyright is obsolete, if it ever served any purpose at all, and that the only solution to our current problems is simply to do away with the concept of ``ownership'' of ``intellectual property'' entirely. Setting aside for the moment the philosophical reasons (partially because the philosophical foundation of this essay is laid out in an earlier chapter, and I intend to treat it as axiomatic for the purposes of this essay), I would like to focus on the practical issues behind this movement.
This movement seems to be driven by the copyright abuses and excesses of the current intellectual property industries. The draconian decades-long copyright that artificially locks up our cultural heritage, the indignities foisted on us in the name of End User License Agreements (EULAs), the foolish and easily abused DMCA (see the justification of the ``foolish'' adjective in chapter 11.5) that is powered by copyright concerns... these things and many more are accurately diagnosed as problems with the system. It seems only natural that eliminating the system would do away with these problems entirely.
Indeed, eliminating copyright entirely would eliminate these problems, but I think that solution throws out a lot of good stuff as well. While the focus is naturally on the monetary aspects of ``copyright'', and the abuses of the system made enticing by the prospect of profit, there are a lot of other important aspects that should not be discarded. First, there are the ``moral rights'', which are typically considered part of copyright. These include, but are not limited to:
Moreover, it remains common sense that by and large, the way to get people to do things is to allow them to profit by doing them. Thus, if you want a rich communication heritage, it stands to reason that allowing people to benefit from creating new messages will encourage people to create.
It is beyond the scope of this essay to discuss how long copyright should last. Such a determination should be made by an economic analysis of the costs and benefits of a given length. Personally, I feel very safe in saying that copyright terms are far too long right now, possibly by as much as a factor of 10, but the fact that copyright is too long right now is really independent of the question of whether anything like copyright should exist at all. I think a lot of the people who would simply abolish copyright entirely have come to that opinion by conflating those two questions. ``The current copyright duration is harmful'' does not imply ``The ideal copyright duration is zero.''
There are certainly others who argue for an ideal duration of zero on other grounds; while I don't buy those arguments myself, they are interesting and I respect those who come by them honestly, rather than by conflating the two questions.
Copyright law, as mentioned before, is built around a model where expression is the root unit of communication. To build a system for the future, we must create some other model of communication... and the more-complete communication model described in chapter 3 can again guide us.
In light of that model, it's fairly easy to break things down into two parts: the concrete part(s) and the human-received message. While they are very strongly related to each other, they are not the same thing, and you must understand each on its own before you can understand their interactions.
The concrete parts of a given message are the parts of the message that can be adequately handled by modeling them as expressions. The reason for this is that they are effectively static expressions, changing only with direct human intervention, which then produces new expressions.
Take the web page located at http://www.cnn.com/ . While the page that you view is quite dynamic, it consists of lots of little pieces that are static, like photos, news article intros, and headlines. Each of those individual pieces is an effectively static bit of information, swapped out over time with other static bits of information, under the control of some computer program running at CNN. ``The page located at http://www.cnn.com/ '' changes extremely frequently, but the individual parts of it do not.
Also, the program that is knitting all these pieces together is itself static, with changes occurring only when a human intervenes. Somewhere, there is some code on some computer doing all this work. This program can be treated as a single concrete object. Even something as unstable as a search engine result page still consists of a database of processed web pages, which are static bits of data, and some program that assembles them, which has a discrete existence.
All communication, no matter how dynamic, must draw from some pool of static parts. The static parts may be very small, down to even individual letters or numbers (which themselves may not meet the creativity criterion for copyright), but they must exist somewhere.
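To make this distinction concrete, here is a minimal sketch in Python. It is purely my own illustration, not CNN's actual system; the headlines, photo names, and assemble_homepage function are all hypothetical. The point is that every piece the program draws from, and the program itself, is a static concrete part, while the assembled result a given visitor sees at a given moment is not.

\begin{verbatim}
from datetime import datetime, timezone

# Hypothetical static concrete parts: each could be authored and owned
# separately, and each changes only when a human swaps it out.
HEADLINES = ["Storm moves up the coast", "Markets close mixed"]
PHOTOS = ["photo_0412.jpg", "photo_0413.jpg"]

def assemble_homepage(headlines, photos):
    """A static program that yields a different page on every run, even
    though every piece it draws from is itself static."""
    lines = ["Front page as of " + datetime.now(timezone.utc).isoformat()]
    for headline, photo in zip(headlines, photos):
        lines.append("[" + photo + "] " + headline)
    return "\n".join(lines)

# Two visitors a minute apart may see different assemblies of the very
# same static parts; only the parts and the program have a fixed existence.
print(assemble_homepage(HEADLINES, PHOTOS))
\end{verbatim}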
The human-experienced message is the way the message is perceived by the human being in the model. In other words, the closest possible thing you can get to the actual human experience. After all, the whole point of communication, no matter what its form, is to stimulate the firing of nerve impulses in the human recipient's brain, which should be considered the ``most pure'' form of this idea.
It is probably OK in practice to step back one level from this ultimate destination and focus on the representation that the human perceives, and thus in practice refer to ``the browser-rendered web page'' or ``the television transmission reception'' without loss, as it is essentially impossible to discuss what happens inside of someone's brain. Nevertheless, that is the fundamental effect that we are talking about, and there are times when this distinction will matter.
An example of when it is better to use the approximation is when trying to determine whether or not a given person has consumed content. Practically speaking, if you order a pay-per-view movie and then go shopping while the movie is playing, you really don't have a basis for claiming an ethical right to a refund. While it is true in theory that you did not get what you paid for, there's no way one can expect the movie provider to check up on you and find out whether you really watched the movie. Indeed, you'd probably consider that intrusive.
Note we are not talking about ``the television program''; a given ``program'' may be wrapped in different commercials from run to run, and may have a wide variety of advertisements run across the bottom during the show, not to mention random editing cuts for content, fitting the screen, or blatant greed. We are concerned with the actual transmission, the actual images seen, not the ``logical'' (to use a computer term) content. I am not talking about ``the web page'', which may be rendered in any of umpteen thousands of variations by various browsers and their various settings; I am talking about the actual final images shown on the screen. I'm talking about as close to the human sensory organs as you can get.
Of course, this directly corresponds to the fundamental property of communication that ``Only humans matter.'' Who cares what your computer sees? Only what you personally experience really matters.
Compare figure 19 with figure 20. To understand the difference, consider the rights and responsibilities each pictured participant has. In figure 19, representing the traditional model, here's the rough breakdown:
In the new model, we have the following parts:
It is the Sender who ultimately has responsibility for making sure they have the rights to use the concrete parts as they intend to use them.
Note that I'm basically defining the Sender as the entity or group entity that has final say over what goes into the message. Given the number of concrete parts which can go into a message (``as many as you want''), each of which can have entirely separate authors, this is the only definition that makes sense. There is always such an entity; if you're in a situation where you think there might be multiple such entities, there are actually several independent messages. For instance, you might be in a chat room with multiple participants, and it might look like all of the participants are responsible for the final product. But in reality, each participant is sending separate messages, which happen to be interleaved on the screen. This matches reality; if one participant says something insulting, we do not blame the others, because they have no control over the message the insulting participant is sending.
Now, the creator of the assembler (software author, machine manufacturer) may bear some responsibility for what is output if they build the assembler in such a way that it always includes, excludes, or modifies content in an unethical way. But ultimately it is still the Sender's responsibility, because if there is no way to ethically send a message, they still have the option of not sending any message at all, so we need not concern ourselves too much with this possibility.
Since this is complicated, I would like to give some examples of what all these parts are.
Let's suppose you are reading this in your web browser, via the online version of this essay that I am providing. What do these parts correspond to?
A more traditional example: Suppose this had been published as a traditional book.
We can look at the old model in terms of the new model, both to better understand their differences and to see where the old model fails to capture important nuances.
The old model can be seen as a special case of the new model, when the following conditions are true:
But look at what the old model loses. It has no way to represent any of the fancy messages we discussed in 6.3. It can't handle programs that dynamically assemble things from other messages. It can't handle the impact of various web browsers (``decoders'') on the message, because in the original expression model everything is intimately tied to its inherent physical representation.
As alluded to previously but not said directly, this model is still tolerably useful when the relationships are all static. For example, the individual article text for the stories CNN runs is still handled decently by the old model. These articles do not change over time and typically consist of some static set of quotes from sources, writing from CNN correspondents, and other such static material. Where the model breaks down is when programs start dynamically knitting the static parts together. It's impossible to handle the homepage of CNN.com as an expression because it's effectively impossible to nail down just what ``the homepage of CNN.com'' is. The concrete part and human-perception model works because the problem is broken down: concrete parts (very similar to expressions) for the portions of the message that the expression model handles well, and a human-perceived message for each individual experience of the homepage by a person. Separately, they can be treated sensibly.
This separation gives us a much-needed degree of flexibility regarding legislation. It allows one to make laws concerning just the concrete parts, or just the use of concrete parts in human-received communication, without needing to rule on the whole communication at once, which is very difficult and tends to overflow into other domains.
For instance, there are laws on the books regarding compulsory licensing of music, setting fair rates and methods of collection. Such laws have become a hindrance lately because of the way the expression concept conflates content with delivery. Thus, there is no one rate set for music licensing, nor even the possibly-feasible rate of licensing given an intended use (background to a radio news report, business Muzak playing, education and criticism, etc.). Instead, the rates are set for music given a certain distribution method, which is to say, the exact mechanics of how the message of the music is sent. Unfortunately, there are a wide variety of ways of distributing music, and subtle variations on each one. One recent example of this is how the law has handled the case of streaming music over the web, which went very poorly and upset lots of people.
Using this model, we can concentrate on just the important questions regarding the two ends of communication. What music was sent in a message, how was it used, and who received it? The exact manifestation of the ``expression'' is not what matters. It doesn't matter if the user is listening on the radio, or listening to a CD recording of some radio broadcast. What matters is how many people heard it, what the music was used for, and what music it was. Trying to enumerate all the delivery methods is doomed to fail, so don't try.
This drastically reduces the number of special cases in the law by eliminating the need to consider the large and rapidly increasing number of different media for delivery of a message, and it would correctly handle an entire domain of concrete content, no matter what transmission methods or other uses are imagined in the future. So even though the new model is more complicated than the expression model, its ability to more accurately reflect the real world moves us closer to all of the goals of simplicity, completeness, robustness, and usefulness (chp. 10).
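As a purely illustrative sketch of that idea, consider a licensing rule keyed on use and audience rather than delivery method. The use categories, per-listener rates, and function below are hypothetical placeholders of my own, not actual statutory figures; real rates would come from the economic analysis described above.

\begin{verbatim}
# Hypothetical use categories and per-listener rates.
RATES_PER_LISTENER = {
    "background_to_news": 0.0005,
    "business_ambience": 0.0010,
    "education_or_criticism": 0.0,
}

def license_fee(use, listeners):
    """Fee depends on what the music was used for and how many people
    heard it, not on whether it arrived by radio, CD, or stream."""
    return RATES_PER_LISTENER[use] * listeners

# The same song, used the same way, heard by the same number of people,
# costs the same whether it was broadcast, streamed, or played from a disc.
print(license_fee("business_ambience", 2500))
\end{verbatim}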
The other half of the problem is of course how to handle these human-received messages ethically and reasonably intuitively. Fortunately, it's easier once we abstract the concrete parts away.
The human-experienced message is much more complicated than the mechanics of manipulating concrete parts. As a result, it is worth its own chapters, such as the chapter on message integrity. But I can give at least one example of a purely ``human experience'' issue.
There's something called look-and-feel in computer user interfaces, or more generally just the idea of style. When Apple sued Microsoft because Microsoft Windows was too similar to the Macintosh OS, it was strictly a matter of human perception of the software. One of the reasons this case was so controversial is that it was one of the first cases dealing solely with human perceptions, where the flaws of the expression doctrine were painfully obvious. Apple was not accusing Microsoft of ripping off any of the concrete parts it owned: Microsoft did not steal code, they did not steal any significant amounts of text (a few menu commands like ``Edit'' or ``Paste'' can hardly be worth a lawsuit), and they did not steal graphics. The graphics were certainly inspired by the Apple graphics, but no more so than any action movie car explosion is inspired by another; similarity is enforced by the similar functions, and you can't reasonably claim ownership of all graphics that bear any sort of resemblance to your own work. Yet Apple contended that the net result was that the human experience of using Windows was too much like the experience of using the Macintosh to be legal, that there must be some form of infringement taking place.
It does make some sense that there might be some infringement here, even without any concrete parts being stolen, but it is much more difficult to quantify damages or draw boundaries delimiting who owns what. Another ethical complication is that there is frequently societal benefit to such style copying. We no longer even think about many things that are now very standard styles, such as the beginning of books (cover page, detailed info page, dedication, one of a handful of table-of-content styles, all of which are highly standardized across all companies), the looks of late-generation models of any particular device (there tends to be a lot of convergence into one ``final'' look for the device; consider the near-uniformity of television designs as the design is dominated by a display area of a certain size, and little else, compared to the era when a television was a small part of a larger cabinet), and other such things. There is great benefit in standardizing on interfaces for all sorts of things, not just software, to reduce the time needed to learn how to use the multitude of objects available to us now.
An expression can be physically possessed. A communication can not. In the case of something like a book, what is possessed is merely one incarnation of the communication, not the communication itself. So it's not surprising that the First Sale doctrine is coming under attack. Yes, there are obvious monetary motives behind the attacks, but the whole idea of a First Sale doctrine critically depends on the world only consisting of expressions. Even without the monetary motivation, the doctrine was doomed to fall anyhow.
It is not necessarily the case that the only possible outcome is that no sharing or ownership is ever allowed, though. For instance, there's no need to restrict a person's right to record a given communication. Indeed, there are many practical reasons to consider such a right necessary. While I would not want to go so far as to call it a right, demanding that a customer only be allowed to receive some communication within a certain time frame, even if they have the technical ability to shift that time frame, is just pointlessly jerking the customer around; it may satisfy the sender's need to control things but it's nothing but a harm to the consumer with no conceivable benefit to society at large.
It is also possible to work out ethically valid ways of sharing a message. The idea that Tom and Fred watching a pay-per-view movie together is ethically OK, but that it's wrong for Tom to tape the pay-per-view movie and give it to Fred for one viewing, is silly. The effect is the same in both cases. The opposite extreme, copying the movie to a computer and allowing the world to download it at will, is also obviously a bad idea (even if you don't buy the economic arguments for that, it effectively destroys the moral rights of the authors), but there are intermediate possibilities. Consider allowing the owner of a legitimate DVD to make unlimited copies for his immediate family (or cohabitants, or X people, or any limited group), perhaps at some quality loss, but not allowing anyone to copy the copies.
There is no way around the fact that the guidelines will be fuzzy and subject to judicial review. In no way does that obligate us ethically to believe that the draconian measures that Hollywood is pushing for are the only solution. Fuzziness is part of life.
One non-copyright-based example of something that has already been shaken up by resting on a faulty foundation is the academic concept of citing. How do you correctly cite a web page, which may be evanescent by its very nature? Guidelines have of course been set forth by the various style committees (for instance, see http://www.apastyle.org/elecref.html). But even a specification of a URL and a time stamp is not always sufficient to specify a web page precisely enough for another user to see what is being cited; the page may be personalized to the specific user or even have totally random content, with no way to cause specific content to appear.
The problem here lies, of course, in the fact that the citation is trying to cite the human-experienced message itself, which is too transient to make a good citation target. In theory, the solution is to cite the specific static content items being referenced, including how they were assembled. In reality, that is not feasible, because there is most likely no way to specify the content items directly, let alone how they were assembled.
In the absence of a strong promise to maintain content at certain URLs in perpetuity with a guarantee of no change (a good promise for an online journal to make), the only solution to this problem is for the academic to save their own copy of the web page they wish to reference, and reference their saved copy instead, again using this as a reasonable approximation of saving the human-experienced message. Here we see an example of where fair use ought to be strengthened in the modern environment, because without the right to do this for academic purposes, there is no rational way to cite things on the web. I accept it as axiomatic that we want academic discourse to continue, even over the potential objections of copyright holders.
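As a minimal sketch of that practice, the archiving step might look something like the following Python. The function name, directory, and file layout are my own invention, and this ignores authentication, DRM, and pages that only render correctly in a full browser.

\begin{verbatim}
import urllib.request
from datetime import datetime, timezone
from pathlib import Path

def archive_for_citation(url, archive_dir="citations"):
    """Fetch a URL and store the raw bytes alongside the URL and a UTC
    timestamp, so the citation can point at this fixed copy."""
    fetched_at = datetime.now(timezone.utc).isoformat()
    body = urllib.request.urlopen(url).read()

    out_dir = Path(archive_dir)
    out_dir.mkdir(exist_ok=True)
    saved_page = out_dir / (fetched_at.replace(":", "-") + ".html")
    saved_page.write_bytes(body)

    # Keep the citation metadata next to the saved copy.
    saved_page.with_suffix(".txt").write_text(
        url + "\nretrieved " + fetched_at + "\n")
    return saved_page

# archive_for_citation("http://www.apastyle.org/elecref.html")
\end{verbatim}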
The need to save archive copies for academic citation implies the technical ability to save the content. No Digital Restrictions Management system that I've seen pays more than lip service to this. People need the right to convert transient messages into their own concrete representations and archive them, to the extent that it is technically possible.
Of course, once that is granted, there's every reason to extend it to the general case of requiring that any content a person can experience can be relatively easily archived by the recipient for their personal use, and potentially for other limited uses such as the aforementioned academic citation.
One seemingly clever attack on this model is to write a program that can in theory output every possible web page, then claim that you have rights to all web pages in the world because your program could theoretically generate them. I know someone will email me this if I don't address it here. This is wrong in at least three ways:
Under this formulation, something very like copyright law applies to concrete parts. Current copyright law requires that an expression must concretely exist before it can be protected. In light of the previous section, we can further clarify this to say that in order to exist in the eyes of the law, a copyrighted work must exist in a tangible form experiencable by a human. Until it is experiencable by a human, it is neither protected, nor can it constitute infringement.
This is a direct consequence of ``Only humans communicate.'' A copy of a document on a hard drive is not itself a tangible form experiencable by a human. We lack the sensory apparatus to directly experience the magnetic changes on the hard drive or the electric currents that it uses to communicate with computers. Only when the document is rendered to the screen does it become experiencable by a human. This more closely matches our intuition of when infringement occurs.
Going back to a previous example I used of a hacked server being used to serve out illegal copies of software, this helps us understand how we can rationally not hold the server owner responsible. Assuming the owner never uses the software on their hard drive, the owner has not committed any copyright violation. Yes, illegal software is sitting on their hard drive, but who cares? The owner of the hard drive is not experiencing the content.
As a bit of a tangent, I wouldn't even recommend trying to charge the owner with contributory infringement; perhaps someday we will be good enough at writing secure software that we can hold a server owner responsible for everything that happens on that server. But at the current state of the art, where software still routinely has all the structural integrity of swiss cheese, there is never any way to reasonably guarantee that a computer can not be misused.
Another example: The mere act of downloading anything, be it software, music, a document, whatever, is not intrinsically unethical (abstracting away potential second-order effects like using bandwidth somebody else paid for inappropriately). Until the content is experienced, the mere copying is a null event.
The Expression Doctrine is dead. It is already useless, in the sense that it produces no answers to modern questions. One way or another, it is going to be replaced with something. The question is whether that something will be ad-hoc or well-principled.
We see that there is a well-principled alternative to the Expression Doctrine, based on a more reasonable understanding of the way communication works. By separating the concrete parts of a message from the dynamic parts, and handling them separately, we can create principles that are much more useful. Laying down the exact principles is a matter of law, but I have shown how they can be constructed, with examples of how these principles make rational laws possible, such as in the field of compulsory music licensing.