These days, with the wealth of information readily available on the internet and many students opting to earn their education through online colleges and universities, the prevalence of plagiarism is on the rise. A 2005 study conducted by the Center for Academic Integrity (www.academicintegrity.org) concluded that 40% of the 50,000 undergraduates surveyed admitted to having plagiarized from the internet. This is a very large jump over a span of six years, from only 10% in 1999 (Badke, 2007). It is becoming clear that educators, as well as students, need to become more familiar with what plagiarism is, what constitutes it, and how it can be avoided, in order to ensure students are getting the most out of their online learning.
Detection software products use various methods to analyze papers written by students. The processes used by these programs include text matching against indexed sources and style analysis of content (Kennedy, 2006). Each method of detecting plagiarism has advantages and disadvantages; however, all are similar in that they attempt to detect plagiarism after it has been committed (Kennedy, 2006). Text-matching software searches the internet for word-level matches with indexed sources (Kennedy, 2006). Style analysis examines the style in which a paper is written and then compares that style with work available on the internet (Kennedy, 2006). Both of these methods also compare phrases to help detect plagiarism (Kennedy, 2006). Detection technology based on text matching and style analysis against the internet and previous papers written by other students has inherent limitations and does not always work (Johnson, Patton, Bimber, Almeroth, & Michaels, 2004).
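The text-matching approach described above can be sketched in simplified form: both the suspect paper and an indexed source are broken into overlapping word phrases (n-grams), and the fraction of shared phrases serves as a crude similarity score. This is only an illustration of the general idea, under assumed function and parameter names; commercial detection products use far more sophisticated indexing and matching.

```python
def ngrams(text, n=3):
    """Return the set of lowercase word n-grams in `text` (illustrative)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(suspect, source, n=3):
    """Fraction of the suspect paper's n-grams also found in the source.

    A high score suggests copied phrasing; a hypothetical detector would
    flag papers whose score exceeds some threshold for human review.
    """
    suspect_grams = ngrams(suspect, n)
    if not suspect_grams:
        return 0.0
    return len(suspect_grams & ngrams(source, n)) / len(suspect_grams)

source = "Plagiarism detection software compares student papers with indexed sources"
copied = "Plagiarism detection software compares student papers with web sources"
unrelated = "The quick brown fox jumps over the lazy dog"

print(overlap_score(copied, source))     # high: most phrases are shared
print(overlap_score(unrelated, source))  # zero: no phrases are shared
```

The example also hints at the limitation noted above: the score can only be computed against sources the system has indexed, so material that was never indexed, or that has since changed, goes undetected.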
There are many reasons why detection software may fail, but three are the most common. The first is that a web source cited in a paper may have been removed from the internet between the time it was cited and the time the paper was checked (Kennedy, 2006). The second is that there is no fixed image of the web: the web is ever changing, and the software can lag behind the current state of the