Before OpenGL 2.0, the OpenGL pipeline was a fixed-function machine. Developers could configure states, lights, and materials, but the transformation of vertices and the coloring of fragments were performed by opaque, driver-controlled hardware. This provided predictability and simplicity, but at a great cost: visual creativity was limited to what the fixed hardware allowed. To achieve a custom lighting model or a non-photorealistic effect, programmers had to resort to cumbersome workarounds, often using multiple passes or abusing texture combiners.
Against that backdrop, the programmability introduced by OpenGL 2.0 was nothing short of liberating. Suddenly, a single OpenGL 2.0 code path could simulate realistic water surfaces with dynamic reflections, create cel-shaded cartoons with hard-edged lighting, or render soft shadows using percentage-closer filtering. The era of “shader effects” began, and with it came a Cambrian explosion of visual techniques. Games like Doom 3 (2004) and Half-Life 2: The Lost Coast (2005) showcased the power of per-pixel lighting and normal mapping, techniques built on the same programmable-shader model that OpenGL 2.0 brought into the core standard.
Inevitably, the march of progress left OpenGL 2.0 behind. The release of OpenGL 3.0 in 2008 declared the fixed-function pipeline and immediate mode deprecated, and OpenGL 3.1 in 2009 went further, removing most of that functionality from the core API. The API pivoted entirely toward a programmable, shader-only model, a break with OpenGL 2.0’s comfortable dual nature that was nonetheless necessary for efficiency on modern GPU architectures. Yet for many years, the vast majority of consumer hardware and games continued to target OpenGL 2.0 (or its direct competitor, DirectX 9) as the baseline.
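
To make the contrast concrete, here is a minimal sketch, not taken from any particular engine, that juxtaposes the two models: the kind of fixed-function lighting setup pre-2.0 code was limited to, and a small GLSL 1.10 vertex/fragment pair, compiled through the OpenGL 2.0 shader API, that quantizes per-pixel diffuse lighting into cel-shading bands. The loader (GLEW), helper names, light setup, and banding thresholds are illustrative assumptions, and error checking is omitted.

```c
#include <GL/glew.h>   /* assumed loader for the OpenGL 2.0 entry points */

/* Fixed-function era: the application can only configure the built-in
 * lighting model; the per-vertex and per-fragment math is fixed. */
static void setup_fixed_function_lighting(void)
{
    const GLfloat light_dir[] = { 1.0f, 1.0f, 1.0f, 0.0f };  /* directional */
    const GLfloat diffuse[]   = { 0.8f, 0.8f, 0.8f, 1.0f };

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, light_dir);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse);
}

/* OpenGL 2.0 era: the application supplies its own GLSL 1.10 programs.
 * This pair quantizes per-pixel diffuse lighting into bands, a toy
 * cel-shading effect the fixed pipeline could not express directly. */
static const char *toon_vs =
    "varying vec3 normal;\n"
    "varying vec3 lightDir;\n"
    "void main() {\n"
    "    normal   = gl_NormalMatrix * gl_Normal;\n"
    "    lightDir = gl_LightSource[0].position.xyz; // directional light\n"
    "    gl_Position = ftransform();\n"
    "}\n";

static const char *toon_fs =
    "varying vec3 normal;\n"
    "varying vec3 lightDir;\n"
    "void main() {\n"
    "    float d = max(dot(normalize(normal), normalize(lightDir)), 0.0);\n"
    "    float band = d > 0.75 ? 1.0 : (d > 0.4 ? 0.6 : 0.25);\n"
    "    gl_FragColor = vec4(vec3(band), 1.0);\n"
    "}\n";

static GLuint compile_shader(GLenum type, const char *src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);          /* check GL_COMPILE_STATUS in real code */
    return shader;
}

static GLuint build_toon_program(void)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, compile_shader(GL_VERTEX_SHADER, toon_vs));
    glAttachShader(program, compile_shader(GL_FRAGMENT_SHADER, toon_fs));
    glLinkProgram(program);           /* check GL_LINK_STATUS in real code */
    return program;                   /* activate with glUseProgram(program) */
}
```
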