As Web 2.0 further extends our communications options, a battle is brewing between copyright owners and the search engines.
Search engines are typically seen as an asset to Web site owners, who vie for high search-results rankings to raise visibility and drive traffic to their sites. But some Web site owners claim their copyright is being violated when their material is cached — electronically copied and stored — by search engines such as Google or Yahoo!. Serving the cached copy rather than re-fetching the original page shortens access time, but the caching is done without the permission of whoever created the original file.
This conflict between copyright owners and search engines is of particular interest to Kathy Olson, associate professor of journalism and communication, who researches copyright law and the issues surrounding the fair use doctrine. Olson says the Internet has resulted in an explosion of copyright conflicts as owners of intellectual property try to protect their rights online. Digital technology poses a greater threat than technological advances in the past because text and images can be reproduced not just once, but an infinite number of times without copies degrading and losing their quality.
“Copyright owners are trying to find new ways to protect their rights, such as contracts, licensing, and digital rights management. That’s a problem, because when you lock up the work, things like fair use are completely shut out,” says Olson. “As readers, we have a right to look at an author’s article and use it or quote small parts of it. That tension between copyright owners and people who need to use the copyrighted work has to be addressed.”
Copyright lies at the heart of every Web index. When Web surfers look for a story or image, every search produces a thumbnail or cached page that is created by copying directly from the original source without the author's express written consent. At the same time, the fair use doctrine generally allows some copying, without permission, when it is done for socially beneficial uses.
Olson has researched cases in which the courts have examined whether search engines can conduct this copying under fair use. So far the courts have ruled in favor of search engines, and Olson agrees with the outcome, if not the courts’ reasoning. While search engines need leeway and protection for necessary copying, she argues, the fair use doctrine is not the best way to provide it. Because fair use weighs a number of case-specific factors in deciding whether a particular use is fair, it is better suited to traditional uses where a case-by-case analysis is necessary.
“Search engines are so integral to using the World Wide Web, and harnessing the information on it, that they must have much broader permission to do this kind of necessary copying,” Olson says. “The fair use doctrine is too variable and unpredictable to give search engines the safe harbor they need to do their jobs.” Olson calls for an alternative to a case-by-case analysis of fair use for search engines, such as blanket statutory protection or acknowledgment that search engines have an implied license to copy Web pages for indexing. “When you put something on the Web, it’s up to you to allow or block search engines from finding and indexing your page. If you don’t opt out, it’s going to be copied by search engines. Because the technology exists to block these searches, there should be almost a shifting of the burden from asking for permission to copy to denying permission.”
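The opt-out technology Olson alludes to is, in practice, the Robots Exclusion Protocol: a plain-text robots.txt file at the site root, and per-page robots meta tags, that well-behaved crawlers check before copying a page. A minimal sketch of how a site owner can deny permission (the paths and site shown are illustrative, not from the article):

```
# robots.txt — served from the site root (e.g., example.com/robots.txt)

# Ask all crawlers to skip a hypothetical /private/ directory
User-agent: *
Disallow: /private/

# Ask one specific crawler (here, Google's) to skip the entire site
User-agent: Googlebot
Disallow: /
```

A page can also stay in the index while opting out of caching specifically, via a robots meta tag in its HTML head — `<meta name="robots" content="noarchive">` — which asks search engines not to offer a cached copy of that page. These mechanisms are voluntary conventions honored by major search engines rather than technical enforcement, which is why the legal question of default permission still matters.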