Theoretical Results for Applying Neural Networks to Lossless Image Compression
dc.contributor.author | Steve G Romanuik | en_US |
dc.date.accessioned | 2004-10-21T14:28:52Z | en_US |
dc.date.accessioned | 2017-01-23T07:00:41Z | |
dc.date.available | 2004-10-21T14:28:52Z | en_US |
dc.date.available | 2017-01-23T07:00:41Z | |
dc.date.issued | 1994-03-01T00:00:00Z | en_US |
dc.description.abstract | The ability to apply neural networks to the task of image compression has been pointed out in recent research. The predominant approach centers on the backpropagation algorithm, trained on overlapping frames of the original picture. Several deficiencies can be identified with this approach: first, no time bounds are provided for compressing images; second, backpropagation is difficult to utilize due to its computational complexity. To overcome these shortcomings we propose a different approach, concentrating on a general class ${\cal N}^{*}$ of 3-layer neural networks with 2(N+1) hidden units. It will be shown that the class ${\cal N}^{*}$ can uniquely represent a large number of images; in fact, the growth of this class is more than exponential. Instead of training a network, it is constructed automatically. The construction process can be accomplished in ${\cal O}_{Worst}(n) = n^{4} - n^{2}$ time, where $n$ is the image size. Obtainable lossless compression rates exceed 97\% for square images of size 256. | en_US |
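As a quick illustration of the worst-case construction bound stated in the abstract, the sketch below simply evaluates ${\cal O}_{Worst}(n) = n^{4} - n^{2}$ for the image size the report mentions (the function name is ours, not from the report):

```python
def worst_case_steps(n):
    # Worst-case construction time from the abstract: n^4 - n^2
    return n ** 4 - n ** 2

# For a square image of size n = 256, as cited in the abstract:
print(worst_case_steps(256))  # 4294901760
```

The $n^4$ term dominates, so for size-256 images the bound is on the order of $2^{32}$ elementary steps.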
dc.format.extent | 215433 bytes | en_US |
dc.format.extent | 74367 bytes | en_US |
dc.format.mimetype | application/pdf | en_US |
dc.format.mimetype | application/postscript | en_US |
dc.identifier.uri | https://dl.comp.nus.edu.sg/xmlui/handle/1900.100/1361 | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartofseries | TRC3/94 | en_US |
dc.title | Theoretical Results for Applying Neural Networks to Lossless Image Compression | en_US |
dc.type | Technical Report | en_US |