Edited wiki page through web user interface.
This commit is contained in: parent 7c5e78c1d1, commit c491dcba73
@@ -22,4 +22,11 @@ One alternative I want to try is using two DXT1 textures with the same mapping a
tex t0
tex t1
lerp r0, t0.a, t0, t1
}}}
}}}
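
For reference, here is what the lerp stage above computes per texel, assuming the Direct3D lrp convention (result = s*src1 + (1 - s)*src2); this is an illustrative Python sketch, not shader code:

```python
# Blend performed by "lerp r0, t0.a, t0, t1", assuming D3D lrp semantics:
# r0 = t0.a * t0 + (1 - t0.a) * t1, applied per channel.
def lerp(s, a, b):
    return s * a + (1 - s) * b

# t0.a = 1 picks the sample from the first DXT1 texture,
# t0.a = 0 picks the sample from the second.
```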

This approach is interesting, but according to Tom, it doesn't work that well in practice.

Instead, he suggests doing something similar to Humus' compression, but using an uncompressed RGB-565 texture and a separate luminance texture. The original texture is downsampled to 1/4th its size, and the luminance texture encodes the luminance difference between the original and the downsampled texture. The luminance deltas usually have a small range, so you could use a per-texture scale factor to scale the deltas to the representable range and obtain higher precision. Tom [http://home.comcast.net/~tom_forsyth/blog.wiki.html#%5B%5BTexture%20formats%20for%20faster%20compression%5D%5D explains it in more detail on his blog].
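
A minimal sketch of that scheme, assuming NumPy, a 2x2 box downsample, Rec. 601 luminance weights, and 8-bit storage for the scaled deltas (all of these are illustrative choices, not Tom's exact layout):

```python
import numpy as np

# Rec. 601 luminance weights (an assumption; any weights summing to 1 work).
LUMA = np.array([0.299, 0.587, 0.114])

def quantize_rgb565(rgb):
    # Round each channel to 5/6/5 bits and decode back to floats in [0, 1].
    r = np.round(rgb[..., 0] * 31) / 31
    g = np.round(rgb[..., 1] * 63) / 63
    b = np.round(rgb[..., 2] * 31) / 31
    return np.stack([r, g, b], axis=-1)

def compress(tex):
    h, w, _ = tex.shape  # h and w assumed even
    # Downsample to 1/4th the texel count (half in each dimension).
    low = quantize_rgb565(tex.reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3)))
    # Luminance difference between the original and the upsampled base.
    up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    delta = tex @ LUMA - up @ LUMA
    # Deltas usually have a small range: normalize by a per-texture scale
    # factor so the 8-bit encoding keeps more precision.
    scale = max(float(np.abs(delta).max()), 1e-8)
    q = np.round((delta / scale * 0.5 + 0.5) * 255).astype(np.uint8)
    return low, q, scale

def decompress(low, q, scale):
    up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    delta = (q / 255.0 - 0.5) * 2.0 * scale
    # Apply the luminance delta uniformly to all three channels.
    return np.clip(up + delta[..., None], 0.0, 1.0)
```

Since the luminance weights sum to 1, adding the delta to all three channels restores the original luminance up to quantization error, while chrominance stays at the quarter-resolution base's values.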

The advantage of this method is that compression is very fast: the downsampled texture is only quantized, and the luminance texture is 1D, so it has a small search space. He has not done any PSNR analysis, though.