Monday, August 12, 2019
Compression Algorithms - Research Paper Example
This process of reducing the size of data is popularly known as data compression, though it is formally known as source coding. Compression is important because it cuts down on the use of resources, such as data storage space or transmission capacity. Because compressed data must be decompressed before it can be used, the extra processing and costs that decompression incurs mean the situation is far from a free lunch. Compression algorithms are typically subject to a space-time complexity trade-off. For example, a video compression scheme may require costly hardware to decompress the video fast enough for it to be watched while it is being decompressed, while opting to fully decompress the video before watching it may be inconvenient or require additional storage. The design of data compression schemes therefore involves trade-offs among several factors, including the degree of compression, the distortion introduced, and the computational resources required to compress and decompress the data. There are also newer alternatives to traditional systems, which sample fully and then compress: compressed sensing principles allow more efficient use of resources by circumventing the need to compress data at all, sampling it directly against a carefully chosen basis.

Origin

Compression is either lossless or lossy. ... Compression algorithms have played an important role in IT since the 1970s, when the internet was growing in popularity and the Lempel-Ziv algorithms were invented. Compression, however, has a much longer history outside of computing. The earliest example of a compression scheme is Morse code, invented in 1838. It compresses data by allocating shorter Morse codes to the letters that occur most often in English, such as "e" and "t". Later, in 1949, as mainframe computers were taking hold, Claude Shannon and Robert Fano invented Shannon-Fano coding. Their algorithm assigns codes to the symbols in a given block of data based on each symbol's probability of occurrence: the more likely a symbol is to occur, the shorter its code, which yields a more compact representation of the data (Wolfram, 2002).

Two years later, David Huffman, while studying information theory, took a class taught by Robert Fano. Fano gave the class the option of either sitting a final exam or writing a term paper. Huffman chose the paper, whose topic was finding the most efficient method of binary coding. After months of research that proved fruitless, Huffman had almost given up and resolved to study for the final exam instead. It was at that point that he had an epiphany and devised a technique similar to Shannon-Fano coding yet more efficient. The major difference between the two is that Huffman coding builds its code tree from the bottom up, repeatedly merging the two least probable symbols, whereas Shannon-Fano coding builds its tree from the top down by splitting the set of symbols.
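To make the idea concrete, here is a minimal sketch of Huffman's bottom-up construction in Python. It is an illustration rather than any particular published implementation; the function name huffman_codes and the sample string are assumptions made for the example. The sketch repeatedly merges the two least frequent subtrees, so the most common symbols end up with the shortest codes, which is exactly the property the essay attributes to Morse code and to Shannon-Fano and Huffman coding.

    import heapq
    from collections import Counter

    def huffman_codes(text):
        """Build a Huffman code table for the symbols in `text`.

        Works bottom-up: the two least frequent subtrees are repeatedly
        merged until one tree remains, so rarer symbols sit deeper in
        the tree and receive longer codes.
        """
        freq = Counter(text)
        # Each heap entry: (frequency, tie-breaker, {symbol: code-so-far})
        heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
        heapq.heapify(heap)

        if len(heap) == 1:  # degenerate case: only one distinct symbol
            _, _, codes = heap[0]
            return {sym: "0" for sym in codes}

        tie = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)   # least frequent subtree
            f2, _, right = heapq.heappop(heap)  # second least frequent
            # Prefix the codes in the two merged subtrees with 0 and 1.
            merged = {s: "0" + c for s, c in left.items()}
            merged.update({s: "1" + c for s, c in right.items()})
            heapq.heappush(heap, (f1 + f2, tie, merged))
            tie += 1

        return heap[0][2]

    if __name__ == "__main__":
        sample = "this is an example of huffman coding"
        codes = huffman_codes(sample)
        encoded = "".join(codes[ch] for ch in sample)
        print(sorted(codes.items(), key=lambda kv: len(kv[1])))
        print(f"original: {len(sample) * 8} bits, encoded: {len(encoded)} bits")

Running the sketch on the sample string shows frequent characters (such as the space) receiving codes of two or three bits while rare ones receive longer codes, so the encoded bit string is noticeably shorter than the original eight bits per character.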