It turns out this fundamental property is the key to robustness. If we had to pay attention to the actions of every computer, router, and device between the sender and the receiver, we'd never be able to sort out the situation with any degree of confidence. The modern Internet is quite complicated, and people have developed some innovative and convoluted ways of communicating over it. As we try to apply older models to these issues, the complexity explodes and we are left unable to determine anything useful.
But we don't need to worry about the actions of every router and program between the receiver and the sender. All we need to worry about are the results, and who is on the chain of responsibility. The technology is unimportant.
For instance, recall the example in chapter 2, where I talked about Australia considering requiring licenses for streaming video over the Internet. A bit of thought revealed the complications inherent in the issue: What if I don't stream, but provide downloadable video? What if I send chunks that are assembled on the user's computer and claim I never sent any actual video, just some random chunks of numbers? What if I just want to stream video as a 1-to-1 teleconference? If you define the problem in terms of what the machines are doing, then any attempt at law-making is doomed to failure, because there's always another way around the letter of the law. Instead, this ethical principle says to follow the effects. If it looks like television, in that you are in any way making video appear to many hundreds or thousands of users reasonably simultaneously, then call it television and license it. I don't care if you're mailing thousands of people CDs filled with time-locked video streams, bouncing signals off the Moon, or using ESP. After all, Australia really only cares about effects; the tech is just a red herring. On the other hand, Grandma emailing a video, or 1-to-1 teleconferencing, is obviously not television. The television commission should then leave it alone.
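To make that chunk-assembly dodge concrete, here is a minimal sketch in Python. (The byte string standing in for a real video stream and the chunk size are invented for illustration.) The sender transmits nothing but statistically random noise, yet the receiver ends up with the video, byte for byte:

    import os

    def split_into_chunks(video_bytes, chunk_size=4096):
        """Break the video into fixed-size chunks."""
        return [video_bytes[i:i + chunk_size]
                for i in range(0, len(video_bytes), chunk_size)]

    def obfuscate(chunk):
        """XOR the chunk against a one-time random pad. Either half,
        viewed alone, really is just random chunks of numbers."""
        pad = os.urandom(len(chunk))
        noise = bytes(a ^ b for a, b in zip(chunk, pad))
        return pad, noise

    def reassemble(pad, noise):
        """The user's computer XORs the two halves back together."""
        return bytes(a ^ b for a, b in zip(pad, noise))

    # The sender honestly transmits nothing but noise...
    video = b"stand-in for a real video stream"
    transmitted = [obfuscate(c) for c in split_into_chunks(video)]

    # ...yet the receiver recovers the video, byte for byte.
    recovered = b"".join(reassemble(p, n) for p, n in transmitted)
    assert recovered == video

Point a law at the bytes on the wire and this scheme slips right through; point it at the effect, video appearing on thousands of screens, and the dodge evaporates.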
Of course there would be details to nail down about how exactly one defines "television" (remember all those axes in the communication history section?), but that's what government bureaucracies are for, right? "Scale of people reached" is a natural axis to consider in this problem. I'm not claiming this yields one unique answer, but actively remembering the principle that only humans communicate provides important guidance in handling these touchy issues, and makes it at least possible to create useful guidelines.
In fact, I submit that no ethical system for communication can fail to include this as a fundamental property. Not only is it nonsensical to discuss the actions of computers in some sort of ethical context, but I think it would be impossible to create a system that would ever actually say anything, due to the huge number of distinct technological methods for obtaining the same effects. Consider, as just one example, the incredibly wide variety of ways to post a small snippet of text that can be viewed by arbitrary numbers of people: any number of web bulletin board systems, a large number of bulletin board systems over Telnet, Usenet, email mailing lists with web gateways, literally hundreds of technological ways of producing the same basic effect. Yet despite the near-identical effect produced by those technologies, if we insist on closely examining the technology, each of them has slightly different implications for who is hosting the content, where the content "came from", who is on the chain of responsibility for a given post, and so on. Are we going to legislate on a case-by-case basis, when even this small domain contains hundreds of distinct technologies? The complexity of communication systems is already staggering, and it's not getting any easier. On the whole, we must accept this principle, or effectively admit defeat.
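To see how little the technology choice changes the effect, here is a sketch of posting the same snippet two of those hundreds of ways: once as an HTTP form submission to a web forum, and once as an email to a mailing list with a web gateway. (The hostnames, addresses, and form fields are invented for the example; only the standard-library calls are real.)

    import smtplib
    import urllib.parse
    import urllib.request
    from email.message import EmailMessage

    SNIPPET = "Has anyone noticed that the effect is identical?"

    def post_to_web_forum(text):
        # Technology #1: an HTTP POST to a web bulletin board.
        data = urllib.parse.urlencode({"body": text}).encode()
        urllib.request.urlopen("http://forum.example.org/post", data=data)

    def post_to_mailing_list(text):
        # Technology #2: SMTP to a mailing list with a web archive.
        msg = EmailMessage()
        msg["From"] = "me@example.org"
        msg["To"] = "discuss@lists.example.org"
        msg["Subject"] = "A post"
        msg.set_content(text)
        with smtplib.SMTP("lists.example.org") as server:
            server.send_message(msg)

    # Different protocols, different servers, different chains of
    # responsibility, yet one indistinguishable effect: the same
    # text, readable by an arbitrary number of people.
    post_to_web_forum(SNIPPET)
    post_to_mailing_list(SNIPPET)

Legislate by examining the protocol and you need a new rule for each of the hundreds of variants; legislate by the effect and one rule covers them all.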