COBE: A 5kB WebGL Globe
This blog post was originally published as a Tweet thread.
First of all, today's popular WebGL libraries usually weigh in at hundreds of kilobytes, but there are also lightweight choices such as http://github.com/vaneenige/phenomenon and http://github.com/xem/W. All of them are wrappers on top of the standard WebGL APIs, just with different feature sets.
Earlier this year we were using a Three.js-based solution: the globe was a SphereGeometry that loaded the full world map JPEG as its texture. Every visitor spent ~40kB downloading that image, yet it still looked blurry, and we couldn't optimize it any further.
We tried setting a higher texture anisotropy (http://threejs.org/docs/#api/en/textures/Texture.anisotropy), which improved things a bit, but that's still a trade-off between performance and quality.
One day I read this great post by GitHub: How we built the GitHub globe. Instead of rendering an image, they render thousands of dots on the globe. That means a lot of information loss (a size win!): no more sharp edges and rich detail, but it still looks amazing.
But one bottleneck in GitHub's approach is that they had to reduce the number of dots from ~12,000 to ~8,000 to keep it fast, because they generate those samples in a loop. This work can be parallelized, though, so the idea of using a shader came to mind naturally.
To start with that idea, let's create a shader that draws a sphere, using a 4096×2048 world map as the texture (80kB):
color = isOnLand(x, y) ? light : dark
Then we can render some kind of lattice on the globe, like this basic one:
color = isOnDot(x, y) ? light : dark
By multiplying them, you get a dotted world map:
color = isOnLand(x, y) && isOnDot(x, y) ? light : dark
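The three snippets above can be sketched in plain JavaScript. This is only an illustration of the idea, not COBE's actual shader code (which runs per-fragment in GLSL): the land mask below is a made-up 8×4 bitmap standing in for the real texture, and `step`/`radius` are hypothetical lattice parameters.

```javascript
// Sketch of the two tests from the pseudocode above.
const MASK_W = 8;
const MASK_H = 4;
// 1 = land, 0 = ocean (illustrative data, not a real map)
const landMask = [
  0, 0, 1, 1, 0, 0, 0, 0,
  0, 1, 1, 1, 1, 0, 1, 0,
  0, 0, 1, 1, 0, 0, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0,
];

// u, v are texture coordinates in [0, 1)
function isOnLand(u, v) {
  const x = Math.floor(u * MASK_W);
  const y = Math.floor(v * MASK_H);
  return landMask[y * MASK_W + x] === 1;
}

// A naive lat/long lattice: a dot every `step` in u and v,
// each dot with the given radius (in uv units).
function isOnDot(u, v, step = 0.125, radius = 0.03) {
  const du = u - Math.round(u / step) * step;
  const dv = v - Math.round(v / step) * step;
  return du * du + dv * dv < radius * radius;
}

// Multiplying (logical AND) the two masks gives the dotted world map.
function color(u, v) {
  return isOnLand(u, v) && isOnDot(u, v) ? "light" : "dark";
}
```

A fragment is "light" only when it sits on land *and* on a lattice dot, which is exactly the multiplication of the two masks.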
Interestingly, thanks to the sampling, if you downscale that texture from 4096×2048 (80kB) to 256×128 (1kB), the dotted world map looks almost the same! Since it's small enough, I just inlined it in the lib as a base64 string:
You might have noticed that the sampling doesn't look good, especially near the North Pole. That's because we are placing samples evenly by longitude and latitude, not by density. A better way is the Spherical Fibonacci lattice; here's an example by @mbostock:
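Here is a sketch of the standard Fibonacci-lattice construction (my own reference version, not COBE's exact source): heights are spaced evenly from pole to pole, and each successive point is rotated around the axis by the golden angle.

```javascript
// Generate n roughly evenly distributed points on the unit sphere
// using the Fibonacci lattice.
const GOLDEN_ANGLE = Math.PI * (3 - Math.sqrt(5)); // ≈ 2.39996 rad

function fibonacciSphere(n) {
  const points = [];
  for (let i = 0; i < n; i++) {
    const y = 1 - (2 * (i + 0.5)) / n;  // height in (-1, 1), evenly spaced
    const r = Math.sqrt(1 - y * y);     // radius of the circle at that height
    const theta = GOLDEN_ANGLE * i;     // spin by the golden angle each step
    points.push([Math.cos(theta) * r, y, Math.sin(theta) * r]);
  }
  return points;
}
```

Because each height band covers the same slice of the sphere's area, the density stays uniform from equator to poles, unlike the plain lat/long grid.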
There's also an excellent animation by @cristobalvila showing the math behind this:
But one blocker was that, to use a shader, we have to compute the nearest Fibonacci-lattice point to the current coordinates. Most implementations today generate those points iteratively, in a kind of "unpredictable" order, which makes that inverse lookup difficult to do on a GPU.
Then I saw http://shadertoy.com/view/lllXz4 by @iquilezles, which implements an algorithm that maps a point on a sphere to its closest Spherical Fibonacci point! Here's the paper if you are interested: http://dokumen.tips/documents/spherical-fibonacci-mapping-fibonacci-mapping-benjamin-keinert-1matthias-innmann.html.
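To make the problem concrete: this is the query each fragment needs to answer. The paper's contribution is doing it in constant time; the brute-force O(n) reference below is only a sketch of *what* is being computed, not the paper's algorithm.

```javascript
// Brute-force reference for "which spherical Fibonacci point is closest
// to p?" — the per-fragment query the shader has to answer.
const GOLDEN_ANGLE = Math.PI * (3 - Math.sqrt(5));

function fibonacciPoint(i, n) {
  const y = 1 - (2 * (i + 0.5)) / n;
  const r = Math.sqrt(1 - y * y);
  const theta = GOLDEN_ANGLE * i;
  return [Math.cos(theta) * r, y, Math.sin(theta) * r];
}

// Returns the index of the lattice point closest to p (a unit vector).
function nearestFibonacciIndex(p, n) {
  let best = 0;
  let bestDot = -Infinity;
  for (let i = 0; i < n; i++) {
    const q = fibonacciPoint(i, n);
    // On a unit sphere, maximizing the dot product minimizes the distance.
    const d = p[0] * q[0] + p[1] * q[1] + p[2] * q[2];
    if (d > bestDot) { bestDot = d; best = i; }
  }
  return best;
}
```

Running this loop per fragment would be far too slow, which is why the constant-time mapping from the paper matters.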
So now we finally have everything ready, and the sampling looks good with the new algorithm:
We also got rid of the precision error by rounding before sin/cos, together with @farteryhr's amazing mantissa trick. If you want to know more about that part:
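To illustrate the underlying precision problem (this is my sketch of the issue, not @farteryhr's exact mantissa trick): GPU shaders typically work in 32-bit floats, and for a large angle like `GOLDEN_ANGLE * i` a float32 simply can't hold enough digits, so sin/cos of it goes visibly wrong. Reducing the angle into [0, 2π) *before* dropping to float32 preserves the precision. `Math.fround` rounds a double to float32, simulating the GPU here.

```javascript
// Why large angles break sin/cos in 32-bit floats.
const GOLDEN_ANGLE = Math.PI * (3 - Math.sqrt(5));
const TWO_PI = 2 * Math.PI;

const i = 100000;
const angle = GOLDEN_ANGLE * i; // ≈ 239996.3, a huge angle

// Naive: round the big angle to float32 first, then take the sine.
const naive = Math.sin(Math.fround(angle));

// Range-reduced: wrap into [0, 2π) before losing precision.
const reduced = Math.sin(Math.fround(angle % TWO_PI));

const exact = Math.sin(angle % TWO_PI); // double-precision reference
console.log(Math.abs(naive - exact));   // large error: float32 lost the low bits
console.log(Math.abs(reduced - exact)); // tiny error: near full float32 precision
```

The naive version is off by orders of magnitude more than the range-reduced one, which is the kind of artifact that shows up as jittering dots on the globe.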
Performance improvement: https://twitter.com/shuding_/status/1467087244464959490 ↩