A Study on the Liability of Internet Service Providers in Web Search Services
- 세창출판사 (Sechang Publishing)
- 창작과 권리 (Creation and Rights)
- Spring 2007 issue (No. 46)
- March 2007, pp. 101-120 (19 pages)
Search engines such as Empas, Naver, Google, and many others operate essentially by sending out web crawlers, or spiders, which traverse the internet, access publicly available websites, and copy large portions of the World Wide Web into the search engine's database on a regular basis, thereby creating an up-to-date index of other people's works. This index has been described as a giant book containing a copy of every web page the spider finds. Website owners who do not want their sites crawled can add a robots exclusion protocol, otherwise known as a machine-readable exclusion clause, at the beginning of a web page to indicate to crawlers that the page may not be suitable for crawling or that the owner does not want the site accessed. Search engines based in Korea risk being technically in breach of copyright because copyright law simply does not cover their ordinary activities. The copying of websites by search engines may be lawful as a result of an implied license granted by the website creator to the search engine operator; however, the strength of the implied-license argument is unclear, as it has not been tested at law and will depend upon the particular circumstances of each case. The Court found that although the copying of images undertaken by a search engine was a breach of copyright in the images, in light of the search engine's use of the images, their reproduction and display constituted 'quotations from works made public'. Search engines' crawling allows new and transformative uses to be recognized as exceptions to copyright infringement where they provide clear benefits to the public and do not interfere with creators' markets.
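The robots exclusion protocol mentioned in the abstract is machine-readable. A minimal sketch in Python, using the standard library's `urllib.robotparser` and a hypothetical rule set for `example.com`, shows how a polite crawler consults a site's exclusion rules before fetching a page:

```python
from urllib import robotparser

# Hypothetical robots.txt rules for example.com: disallow crawling
# of anything under /private/ for all crawlers ("User-agent: *").
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A crawler that honors the protocol checks each URL before requesting it.
print(rp.can_fetch("*", "https://example.com/public/page.html"))   # True
print(rp.can_fetch("*", "https://example.com/private/page.html"))  # False
```

In practice a crawler would load the live file with `rp.set_url(...)` and `rp.read()`; the protocol is advisory, which is precisely why its legal effect (e.g. as evidence for or against an implied license) matters.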
Ⅰ. Introduction
Ⅱ. The Concept of Web Search Services and the Necessity of Crawling
Ⅲ. Crawling under the Information and Communications Network Act
Ⅳ. Web Search Services under Copyright Law
Ⅴ. Conclusion
References
Abstract