Technology / Re: What’s the latest thing in image compression ?
« on: 01/10/2007 23:50:45 »
As an interested observer, it seems that fractal compression never really made it to the big time; I think the major issue was that the compression stage is extremely computationally demanding, even though decompression is fast.
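To give a feel for why: a fractal (PIFS-style) encoder has to match every small "range" block against every larger "domain" block in the same image before it can write anything out. Here's a toy sketch in Python; it's my own illustration rather than any particular codec, and the function name encode_pifs is just something I made up:

[code]
# A toy sketch of why fractal (PIFS) encoding is so slow: every small
# "range" block is matched, by least squares, against every larger
# "domain" block in the same image - an O(blocks^2) search before you
# even add rotations/flips. Real encoders used classification tricks
# to prune this search, but it stayed expensive.
import numpy as np

def encode_pifs(img, r=4):
    """For each r x r range block, find the best 2r x 2r domain block
    and the contractive map: range ~ s * shrink(domain) + o."""
    h, w = img.shape
    # Pre-shrink every candidate domain block to range size by 2x2 averaging.
    domains = []
    for y in range(0, h - 2 * r + 1, r):
        for x in range(0, w - 2 * r + 1, r):
            d = img[y:y + 2 * r, x:x + 2 * r]
            domains.append((d[0::2, 0::2] + d[0::2, 1::2] +
                            d[1::2, 0::2] + d[1::2, 1::2]) / 4.0)
    code = []
    for y in range(0, h, r):
        for x in range(0, w, r):
            rng = img[y:y + r, x:x + r]
            best = (np.inf, 0, 0.0, 0.0)
            for i, dom in enumerate(domains):   # the expensive part
                dv = dom.var()
                # Least-squares fit of brightness scale s and offset o.
                s = 0.0 if dv == 0 else ((dom - dom.mean()) * (rng - rng.mean())).mean() / dv
                o = rng.mean() - s * dom.mean()
                err = ((s * dom + o - rng) ** 2).sum()
                if err < best[0]:
                    best = (err, i, s, o)
            code.append(best[1:])               # (domain index, scale, offset)
    return code

code = encode_pifs(np.random.rand(32, 32))  # already noticeably slow at 32x32
[/code]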
JPEG2000 is a newer enhancement of the JPEG standard that promises to solve some of JPEG's problems: it degrades more gracefully at higher compression ratios, without the "blockiness" artifacts. On the web, at least, its benefits don't seem to outweigh the fact that JPEG2000 files are a much less universal currency. I believe JPEG2000 is based on wavelet coding, which is why it degrades to a more natural "blurriness" if over-compressed.
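For the curious, here's roughly what wavelet coding buys you. This is a toy one-level 2D Haar transform in Python; the real JPEG2000 uses fancier (CDF) wavelets and many decomposition levels, so treat it purely as an illustration of why throwing away detail coefficients blurs the image smoothly instead of carving it into blocks:

[code]
# A minimal sketch (NOT the actual JPEG2000 transform): one level of a
# 2D Haar wavelet decomposition. Zeroing the detail sub-bands before
# inverting gives a smooth, blurry approximation - no 8x8 block edges.
import numpy as np

def haar2d(img):
    """One level of a 2D Haar transform (image dimensions must be even)."""
    # Rows: pairwise averages (low-pass) and differences (high-pass).
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Columns of each half, the same way.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0   # coarse approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh
    hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
    img = np.empty((2 * h, 2 * w))
    img[:, 0::2], img[:, 1::2] = lo + hi, lo - hi
    return img

img = np.random.rand(8, 8)  # stand-in for real pixel data
ll, lh, hl, hh = haar2d(img)
# Crude "over-compression": discard all detail, keep only the approximation.
blurred = ihaar2d(ll, np.zeros_like(lh), np.zeros_like(hl), np.zeros_like(hh))
[/code]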
It's worth remembering that the JPEG standards specify exactly how a compressed image file is to be reconstructed, but the psychovisual model used to decide which image data to keep and which to discard during compression is left open. In my experience, modern JPEG compression algorithms and visual models achieve "near-visually-lossless" compression at about half the file size of algorithms from 10-15 years ago. My gut feeling is that we've probably gone about as far as we can with DCT-based compression systems. JPEG is so universal and entrenched that a new standard would have to be markedly better to actually catch on in the mainstream.
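To illustrate that split between the fixed decoder and the open encoder, here's the lossy heart of baseline JPEG sketched in Python. The quantization table is the well-known example luminance table from the JPEG spec; an encoder's "psychovisual model" largely boils down to how it chooses and scales tables like this one:

[code]
# A minimal sketch of baseline JPEG's lossy step: an 8x8 block is
# DCT-transformed, then each coefficient is divided by a quantization
# table entry and rounded. All the information loss happens in that
# divide-and-round; everything downstream is lossless entropy coding.
import numpy as np
from scipy.fftpack import dct, idct

# Example luminance quantization table from the JPEG spec (Annex K).
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level shift
coeffs = dct2(block)
quantized = np.round(coeffs / Q_LUMA)          # the lossy step
reconstructed = idct2(quantized * Q_LUMA) + 128
[/code]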
I suspect most new research is on video compression, where further advances in motion analysis and compensation will gain much more compression advantage than minor tweaks to still-image algorithms.
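As a rough illustration of what motion compensation does, here's a toy block-matching search in Python: an exhaustive search with a sum-of-absolute-differences metric, where real codecs use far cleverer search strategies. The encoder can then store a motion vector plus a small residual instead of the raw block:

[code]
# A toy sketch of block-matching motion estimation: for each block of
# the current frame, find the best-matching block in the previous frame
# within a small search window, so only the motion vector and the
# (hopefully tiny) residual need to be coded.
import numpy as np

def best_motion_vector(prev, cur_block, y, x, search=4):
    """Exhaustive +/-search window, sum-of-absolute-differences metric."""
    bh, bw = cur_block.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + bh <= prev.shape[0] and \
               0 <= xx and xx + bw <= prev.shape[1]:
                sad = np.abs(prev[yy:yy+bh, xx:xx+bw] - cur_block).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
    return best_mv

prev_frame = np.random.rand(32, 32)
cur_frame = np.roll(prev_frame, (1, 2), axis=(0, 1))  # simulate camera motion
mv = best_motion_vector(prev_frame, cur_frame[8:16, 8:16], 8, 8)
# mv recovers the shift, so the residual after compensation is ~zero.
[/code]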