Blog scraping

Blog scraping is the process of scanning through a large number of blogs, usually with automated software, in search of content to copy. The software, and the individuals who run it, are sometimes referred to as blog scrapers.

Scraping is the copying of a blog, or of blog content, that is not owned by the person initiating the scraping. If the material is copyrighted, copying it in this way constitutes copyright infringement unless a license relaxes the copyright. The scraped content is often republished on spam blogs, or splogs.
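In practice, the automated software involved can be quite simple. The following sketch (in Python, using only the standard library; the feed URLs and error handling are illustrative placeholders rather than a description of any particular scraper) shows the basic pattern of fetching each blog's RSS feed and copying the entries it contains:

    import urllib.request
    import xml.etree.ElementTree as ET

    # Placeholder feed URLs; a real scraper would work from a much larger list.
    FEEDS = [
        "https://example.com/blog/feed.xml",
        "https://example.org/posts/rss",
    ]

    def scrape_feed(url):
        """Download an RSS feed and return (title, content) pairs for its items."""
        with urllib.request.urlopen(url, timeout=10) as response:
            tree = ET.parse(response)
        posts = []
        for item in tree.iter("item"):  # RSS 2.0 wraps each post in an <item> element
            title = item.findtext("title", default="")
            content = item.findtext("description", default="")
            posts.append((title, content))
        return posts

    if __name__ == "__main__":
        for feed in FEEDS:
            try:
                for title, content in scrape_feed(feed):
                    print(f"copied '{title}' ({len(content)} characters)")
            except (OSError, ET.ParseError) as error:
                print(f"could not fetch {feed}: {error}")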

Issues

A blog scraper who gathers copyrighted material is in violation of the law. Blog scraping can create problems for the individual or business that owns the blog, and it is particularly worrisome for business owners and business bloggers. Scrapers can copy an entire post from an independent or business blog, in which case the duplicated content includes the author's tag and a link back to the author's site (if that link appears in the tag). Most blog scrapers, however, copy only the portion of the content that is keyword-relevant to their splog's topic. This increases the keyword relevancy of the scraper's site, and because the rest of the post is omitted, any outbound links are eliminated, so the splog's search engine ranking is not reduced by links pointing elsewhere.
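The keyword filtering described above requires little effort on the scraper's part. A minimal sketch of one possible approach (the keyword, example post, and sentence splitting are illustrative assumptions, not any specific scraper's method) keeps only the sentences of a post that mention the splog's target keyword, discarding the rest of the text along with any outbound links:

    import re

    def keyword_excerpt(post_text, keyword):
        """Return only the sentences of a post that mention the target keyword."""
        sentences = re.split(r"(?<=[.!?])\s+", post_text)
        return " ".join(s for s in sentences if keyword.lower() in s.lower())

    # Hypothetical post: the middle sentence, with its outbound link, is dropped
    # because it does not mention the target keyword.
    post = ("Widgets are popular this year. Read the full review at "
            "http://example.com/review. Cheap widgets rarely last long.")
    print(keyword_excerpt(post, "widgets"))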

Additionally, scraped content can appear on virtually any type of splog or RSS-fed spam site. This means an unsuspecting individual could find their creative or copyrighted material copied onto a site promoting pornography or a similar type of content that may be offensive to the original author and his or her audience, which may damage the original author's reputation.
