Think of it like a brilliant but disorganized artist who carries three identical paintbrushes and a sketchbook of half-finished ideas, and wears heavy steel armor while trying to paint. The model weighed over 5 gigabytes. Running it on a standard laptop was like asking a bicycle to haul a grand piano.
The curators looked inside the model and saw a jungle of mathematical weights: over 1 billion parameters, many of them duplicates or near-zero values. Pruning was like trimming a bonsai tree. They surgically removed the weakest connections. A neuron that never fired? Gone. A weight that was always 0.00001? Deleted.
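In code, the simplest version of that trimming is magnitude pruning: walk through the checkpoint's weights and zero out anything whose value is effectively noise. The sketch below assumes a PyTorch-style checkpoint with a `state_dict` key; the filenames and threshold are illustrative, a toy version of the idea rather than the exact procedure used to produce the official pruned file.

```python
import torch

# Toy magnitude pruning: zero out connections whose values are effectively noise.
# The checkpoint filenames and threshold are illustrative, not the real ones.
THRESHOLD = 1e-4

ckpt = torch.load("v1-5.ckpt", map_location="cpu")
state_dict = ckpt["state_dict"]

pruned = {}
for name, tensor in state_dict.items():
    if tensor.is_floating_point():
        mask = tensor.abs() >= THRESHOLD   # keep only the "strong" connections
        pruned[name] = tensor * mask       # the weakest weights become exactly 0
    else:
        pruned[name] = tensor              # leave non-float entries untouched

torch.save({"state_dict": pruned}, "v1-5-pruned.ckpt")
```

Zeroing values does not shrink the file on its own; the bigger savings in the published checkpoint come from dropping the duplicated tensors the curators found, the "three identical paintbrushes" of the story.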
Now came the magic trick. Normally, the model stored its numbers in fp32 (32-bit floating point), which is very precise, like measuring a hair's width with a laser. But image generation doesn't need that level of precision. fp16 uses 16 bits instead: half the storage, half the memory bandwidth.
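The conversion itself is, in sketch form, a one-line cast per tensor. This snippet assumes the same PyTorch-style checkpoint layout as above, and the output filename is again illustrative.

```python
import torch

# Cast every floating-point tensor to half precision (fp16).
# Non-float entries (e.g. integer counters) are left as-is.
ckpt = torch.load("v1-5-pruned.ckpt", map_location="cpu")
state_dict = ckpt["state_dict"]

fp16_state = {
    name: tensor.half() if tensor.is_floating_point() else tensor
    for name, tensor in state_dict.items()
}

torch.save({"state_dict": fp16_state}, "v1-5-pruned-fp16.ckpt")
```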
Imagine a painter who used to mix colors with a microscale switching to a standard teaspoon. The result is 99% the same, but the model loads twice as fast and uses half the GPU memory. On an RTX 3060, fp16 turned a 10-second generation into a 5-second one. The slimmed-down checkpoint is the file known as v1-5-pruned-emaonly-fp16.
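To see the speedup yourself, the half-precision weights can be loaded directly. The example below assumes the Hugging Face diffusers library and the public Stable Diffusion v1.5 repository id; any CUDA GPU with a few gigabytes of VRAM, like the RTX 3060 above, should work.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the v1.5 weights in half precision: roughly half the VRAM of fp32.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a bonsai tree on a painter's desk, oil on canvas").images[0]
image.save("bonsai.png")
```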
And that is how a clunky genius became a nimble masterpiece.