
CanvasGL Class Reference

Inherits Control.


Detailed Description

This class creates a frame that holds an OpenGL context.

When using this class, you need to include an additional library in your commands, the "opengl32.lib" library, unless you use dig (which can detect this usage). A very small version of it is copied automatically into your lib directory. In order to reduce dependencies, the CanvasGL header is not imported by dig; use "import diggl;" as well.

Color index mode is not supported either in the contexts or in the API.

The GL class and gl singleton wrap some of OpenGL's functionality in a more convenient interface. It's no slower; everything is in final methods that will most likely be inlined. I make minimal design choices: multipurpose functions like glFog are split into component methods, method and constant names lose their gl/GL_ prefixes, and pointless separate functions like glDepthMask are corralled into enable/disable/isEnabled. The texturing API is completely different and pulls everything under a "texture" prefix, likewise "list".

One guiding factor of the API is a lack of buffered state; thus it can interoperate with any other OpenGL code, and it stays focused upon providing a wrapper, not a framework.

The Depth Buffer

OpenGL state includes the depth buffer, a buffer that holds the individual depth of each pixel in the color buffer. CanvasGL asks for a depth buffer with 32-bit precision; modern hardware will provide this, but old hardware might give only 16 bits. Lower precision increases z-fighting, where the intersection between two polygons shimmers as first one polygon and then the other is seen as closer.

The initial GL state doesn't use the depth buffer; enable the depth test with #enable (DEPTH_TEST).

To clear the depth buffer, use #clear (DEPTH_BUFFER_BIT).

The value that the depth buffer is cleared to is controlled by #clearDepth, which is initially 1.

Once we have the depth for a fragment we still need to scale it into the depth buffer so that it can be used for comparisons. This is controlled with the #depthRange function.

The depth function is used to determine whether any given fragment wins a contest with the previous depth fragment. This is controlled by #depthFunc. The initial function is LESS.

Once a fragment wins the contest, whether its depth is actually written depends upon the value of #enable (DEPTH_WRITEMASK), which is true by default.

The most important way to maximise depth buffer precision is to keep the ratio of the far plane to the near plane as low as possible. The higher the ratio, the more depth buffer precision is lost. The two planes are controlled by #frustum.
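
To make the precision loss concrete, here's a sketch in plain Python (not dig code; window_depth is a made-up helper) of the standard perspective depth mapping, showing how a small near plane spends almost all of the buffer's range on the first few units:

```python
def window_depth(dist, near, far):
    """Window-space depth in [0, 1] for a point at eye-space distance dist,
    using the standard perspective depth mapping."""
    z_ndc = (far + near) / (far - near) - (2.0 * far * near) / ((far - near) * dist)
    return (z_ndc + 1.0) / 2.0

# With near = 0.1 and far = 100, a point just 1 unit away already uses
# over 90% of the depth range, leaving little precision for everything behind it:
print(window_depth(1.0, 0.1, 100.0))   # ~0.90
# Raise the near plane to 1 and that 90% mark isn't reached until 10 units out:
print(window_depth(10.0, 1.0, 100.0))  # ~0.91
```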


Lighting

By default, lighting is disabled, causing everything to be drawn fullbright. When enabled with #enable (LIGHTING), the lighting color is multiplied in after texturing but before fogging. While lighting is disabled, #color indicates the color of an object; with lighting, the material parameters are used instead.

GL offers two ways to access lights. You can use the lightXXX methods, which take a light index (light indices start with LIGHT0 and are sequential), or use the #getLight method and call the methods on the returned GLlight object. Apart from the leading index, the two have identical parameters. The number of lights the GL supports can be retrieved using #maxLights ().

Lights must be enabled individually using #enable (LIGHTi), or with the enable method on the GLlight. No lights are initially enabled.

Then you can change the diffuse color with #lightDiffuse/diffuse. This is the color that the light reflects off an object in all directions; a diffusely lit sphere has a warm glow that fades to black as its surface turns away from the light. By contrast, the specular color (#lightSpecular/specular) is the mirror-like highlight of the light itself on the sphere; the highlight is modelled as circular. The ambient color (#lightAmbient/ambient) controls the light's contribution where it's not shining directly on the object. The position (#lightPosition/position) is where the light is located. When you set this value, it is first transformed by the current modelview matrix.

Then there's attenuation, which is how the effect of the light fades over distance. The brightness is multiplied by the inverse of the sum of three factors: constant attenuation (distance has no effect), linear attenuation (doubling the distance halves the brightness), and quadratic attenuation (doubling the distance quarters the brightness), playfully named #lightConstantAttenuation/constantAttenuation, #lightLinearAttenuation/linearAttenuation, and #lightQuadraticAttenuation/quadraticAttenuation. By default these values are (1, 0, 0), resulting in no attenuation. Real life has the factors (0, y, 1), where y is the atmospheric effect, but this can be hard to model with, so linear attenuation is more common. Note, however, that you are free to use (0, 0, 0.1) or lower to change the lower limit of the light's attenuation. You can set all three parameters simultaneously using the #lightAttenuation/attenuation methods.
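
The formula can be sketched in plain Python (not dig code; attenuation is a made-up helper):

```python
def attenuation(distance, constant=1.0, linear=0.0, quadratic=0.0):
    """Brightness multiplier: the inverse of kc + kl*d + kq*d^2."""
    return 1.0 / (constant + linear * distance + quadratic * distance ** 2)

# The default (1, 0, 0) never attenuates:
print(attenuation(100.0))                  # 1.0
# Pure linear attenuation: doubling the distance halves the brightness.
print(attenuation(2.0, 0.0, 1.0))          # 0.5
print(attenuation(4.0, 0.0, 1.0))          # 0.25
# Pure quadratic attenuation: doubling the distance quarters the brightness.
print(attenuation(2.0, 0.0, 0.0, 1.0))     # 0.25
```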

Spotlights are another form of light that project only a certain angular range. Like normal lights, OpenGL spotlights are computed per vertex; this means that if an object such as a street is not sufficiently tessellated, the spotlight might look rough or even be invisible. This limitation makes it tricky to use OpenGL spotlights, but they are available.

The first spotlight parameter is #lightSpotCutoff/spotCutoff. When 180 (the default), this is a normal diffuse light; otherwise it is the spread angle of the spot, from 0 to 90 degrees: 90 degrees results in a half-sphere of light, while 1 degree results in a small circle.

Then there's #lightSpotExponent/spotExponent, which controls the falloff within the cone. When the exponent is 0, the intensity is uniform across the cone, leaving a sharp line at the cutoff. Higher exponents (up to 128) concentrate the light toward the spot's axis, so it fades gradually toward the edge.

Finally, there's #lightSpotDirection/spotDirection, which is the normal of the light's direction. It is transformed by the upper 3x3 of the current modelview matrix.
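
Putting the spotlight parameters together, here's a rough sketch in plain Python (not dig code; spot_factor is a made-up helper) of how the intensity multiplier behaves:

```python
import math

def spot_factor(angle_deg, cutoff_deg, exponent):
    """Spotlight intensity multiplier for a point angle_deg off the spot axis."""
    if cutoff_deg == 180.0:        # not a spotlight: a normal light, no falloff
        return 1.0
    if angle_deg > cutoff_deg:     # outside the cone: unlit
        return 0.0
    # Inside the cone, intensity follows cos(angle) raised to the exponent.
    return math.cos(math.radians(angle_deg)) ** exponent

print(spot_factor(10.0, 25.0, 0))    # 1.0 -- exponent 0: uniform within the cone
print(spot_factor(30.0, 25.0, 0))    # 0.0 -- outside the cutoff
```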

Material Parameters

What effect all this has on an object depends upon the object's material parameters. Each side of a polygon has its own material parameters. The front side is determined by the triangle's winding, which is whether the triangle's points are in clockwise or counterclockwise order; which direction is considered front-facing depends upon the #frontFace parameter. When LIGHT_MODEL_TWO_SIDE is false (the default), only the front face is used for lighting; if the object is facing away from the light, it is considered to have no intensity.
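
Winding comes down to the sign of the triangle's screen-space area, as in this plain Python sketch (not dig code; the helpers are made up):

```python
def signed_area(p0, p1, p2):
    """Signed area of a 2D triangle: positive if the points run counterclockwise."""
    return 0.5 * ((p1[0] - p0[0]) * (p2[1] - p0[1])
                - (p2[0] - p0[0]) * (p1[1] - p0[1]))

def is_ccw(p0, p1, p2):
    return signed_area(p0, p1, p2) > 0.0

# With the default #frontFace setting (counterclockwise), this is front-facing:
print(is_ccw((0, 0), (1, 0), (0, 1)))   # True
# Swap two points and the winding flips:
print(is_ccw((0, 0), (0, 1), (1, 0)))   # False
```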

Material parameters mirror the light's, and each is multiplied against its counterpart: #materialAmbient, #materialDiffuse, and #materialSpecular. Materials add #materialEmission, a color that is added without being multiplied against anything (thus, a (1, 1, 1) emission always results in a white object). They also add #materialShininess, an exponent from 0 to 128 that determines how sharp the specular highlight is: the closer to 128, the sharper its edge. Finally, you can use #materialAmbientAndDiffuse to set both of those parameters to the same color.
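
Roughly, one light and one material combine per channel as in this plain Python sketch (not dig code; the names are mine, and attenuation and spot factors are omitted for brevity):

```python
def lit_color(mat, light, global_ambient, n_dot_l, n_dot_h):
    """Combine one light and one material per RGB channel."""
    out = []
    for i in range(3):
        c = (mat["emission"][i]                                   # added, unmultiplied
             + global_ambient[i] * mat["ambient"][i]              # scene ambient
             + light["ambient"][i] * mat["ambient"][i]            # light ambient
             + max(n_dot_l, 0.0) * light["diffuse"][i] * mat["diffuse"][i])
        if n_dot_l > 0.0:  # no specular highlight on faces turned away
            c += (max(n_dot_h, 0.0) ** mat["shininess"]
                  * light["specular"][i] * mat["specular"][i])
        out.append(min(c, 1.0))
    return out

# A (1, 1, 1) emission always yields a white object, whatever the lights do:
mat = {"emission": (1.0, 1.0, 1.0), "ambient": (0.0,) * 3,
       "diffuse": (0.0,) * 3, "specular": (0.0,) * 3, "shininess": 0.0}
dark = {"ambient": (0.0,) * 3, "diffuse": (0.0,) * 3, "specular": (0.0,) * 3}
print(lit_color(mat, dark, (0.0,) * 3, 1.0, 1.0))   # [1.0, 1.0, 1.0]
```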

My short summary isn't quite done. Whether LIGHT_MODEL_LOCAL_VIEWER is enabled controls how the location of the specular highlight is determined. #lightModelAmbient is a global ambient multiplier applied to every object regardless of the individual lights, which makes it the best candidate for setting a scene's overall ambience.

How the specular highlight works is one of the deficiencies of OpenGL. Or rather, how the resultant color is blended with the texture, which is to multiply the two together. That means that the specular highlight on a black untextured object can be white, while the highlight on a black textured object will be black. To add the specular color onto the fragment color after the light and texture colors have been multiplied together, #enable (SEPARATE_SPECULAR_COLOR).
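
The difference is easy to see per channel in plain Python (not dig code; the helpers are made up):

```python
def modulated(lit, tex, spec):
    """Default path: specular is folded into the lit color before the texture multiply."""
    return min((lit + spec) * tex, 1.0)

def separate(lit, tex, spec):
    """Separate specular: the highlight is added after the texture multiply."""
    return min(lit * tex + spec, 1.0)

# A white highlight (spec = 1) on a black texel (tex = 0):
print(modulated(0.0, 0.0, 1.0))   # 0.0 -- the highlight vanishes
print(separate(0.0, 0.0, 1.0))    # 1.0 -- the highlight survives
```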


Fogging

Fogging interpolates a fog color into the fragment color; the strength of the effect depends upon the depth of the point and the fogging parameters. Fogging is initially disabled; enable it using #enable (FOG).

The equation used to determine the fog factor depends upon the setting of #fogMode, which is initially EXP. The potential values and their equations (where e is the setting of #fogEnd, s the setting of #fogStart, d the setting of #fogDensity, and z the depth of the point):

LINEAR: f = (e - z) / (e - s)
EXP: f = exp (-d * z)
EXP2: f = exp (-(d * z) ^ 2)

The result is then clamped to the range [0, 1]; at 1, the fragment keeps its own color; at 0, the fog color (specified with #fogColor) replaces it entirely; values in between are interpolated.
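
A plain Python sketch (not dig code; the helpers are made up) of the three equations and the final interpolation:

```python
import math

def fog_factor(mode, z, start=0.0, end=1.0, density=1.0):
    """Fog factor for one fragment, clamped to [0, 1]."""
    if mode == "LINEAR":
        f = (end - z) / (end - start)
    elif mode == "EXP":
        f = math.exp(-density * z)
    else:  # EXP2
        f = math.exp(-(density * z) ** 2)
    return min(max(f, 0.0), 1.0)

def fogged(f, frag, fog_color):
    """Interpolate: f = 1 keeps the fragment color, f = 0 gives pure fog."""
    return tuple(f * a + (1.0 - f) * b for a, b in zip(frag, fog_color))

# Linear fog: untouched at the start distance, fully fogged at the end.
print(fog_factor("LINEAR", 0.0, 0.0, 10.0))    # 1.0
print(fog_factor("LINEAR", 10.0, 0.0, 10.0))   # 0.0
```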

By default, the depth of a point is its distance from the camera plane. This is less accurate than using the distance from the camera itself, and results in objects being less fogged at the corners of the view than at its center. If the proper extensions are supported, you can control this with #fogDistanceMode.

Fogging interferes with drawing by producing odd colors when the blend mode is not (ONE, ZERO) or (SRC_ALPHA, ONE_MINUS_SRC_ALPHA). You should find some way to accommodate that, or disable fogging for such objects.


Blending

If you've been following along sequentially, you now know we have a textured, lit, fogged color. How this is put onto the screen is a matter of blending. When blending is disabled, we skip past the next paragraph. If #isEnabled (BLEND), however, we keep going.

The fragment color and the destination color (the color presently in the color buffer) are each multiplied by a factor chosen by #blendFunc. The most popular setting is #blendFunc (SRC_ALPHA, ONE_MINUS_SRC_ALPHA). The two scaled colors are then combined using the #blendEquation; it's conceivable for this to be unsupported, in which case the colors are always added together.
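
For the popular setting with the default additive equation, a plain Python sketch (not dig code; blend_over is a made-up helper):

```python
def blend_over(src, src_alpha, dst):
    """One channel of #blendFunc (SRC_ALPHA, ONE_MINUS_SRC_ALPHA) with ADD."""
    return min(src * src_alpha + dst * (1.0 - src_alpha), 1.0)

# A half-transparent white fragment over a black background:
print(blend_over(1.0, 0.5, 0.0))   # 0.5
# A fully transparent fragment leaves the destination untouched:
print(blend_over(1.0, 0.0, 0.3))   # 0.3
```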

We are now in the end-game. The color is potentially blended once more with the destination color if #enable (COLOR_LOGIC_OP) is set; you control the logic operator used with #logicOp. I warn you that this is not an accelerated path on mainstream hardware and may not be accelerated on any hardware.

Then the channels of the frame buffer are written to. Which channels are written depends upon the settings of the #colorMask. Once that is done we are finished with the fragment.

Rendering Primitives

Primitives are rendered in a #begin and #end section of code. See #begin for details on each mode; here I just cover some basics and fine details that don't belong anywhere else.

Each vertex of the primitive is defined using the #vertex command. This uses the current parameters, defined by: #color, #edgeFlag, #index, #normal, and #textureCoord.

Inside a #begin/#end block, only a limited number of GL commands can be run. Specifically, these are available: #vertex, #color, #index, #normal, #textureCoord, #evalCoord, #evalPoint, #arrayElement, #material, #edgeFlag, #callList, and #callLists. No other command is guaranteed to work; most should generate an error, although a small number of commands error only at the GL's discretion.

OpenGL does not specify how a polygon or quad is broken into triangles, and many implementations split them based on orientation rather than their order of input. This results in severe visual discrepancies when lighting, or when not texturing. Moreover, OpenGL allows polygons to be triangulated with no regard for concavity, so most implementations simply treat them as if they were convex, which renders incorrectly if that's not true. Therefore, you should not use POLYGON, QUADS, or QUAD_STRIP unless the primitive has a static orientation (such as a screen icon), or you're not lighting but are texturing.

2D Texturing

Textures are referred to by an abstract number called the name. GL's texture API differs from OpenGL's. I call 2D textures simply textures (1D and 3D textures will have to get their own names) and name everything with a texture prefix, rather than ugly names like texXXX or arbitrarily swapped names like bindTexture. There are two APIs for textures, using either names or an object. I only cover the object API here, as the name API is identical, just more annoying.

You create a texture using #textureNew. See the GLtexture class for more details.

Public Member Functions

 this (Control parent)
 Register with the parent and create the widget.

void * findProcedure (char[] name)
 Get a function pointer and return it.

 ~this ()
 Delete the OS-specific data.

void makeCurrent ()
 Make this the current OpenGL context.

void beginPaint ()
 Make this the current OpenGL context and prepare for painting.

void endPaint ()
 Finish painting; display the current buffer.

void fullscreen (bit value)
 Assign whether to use fullscreen mode.

Static Public Member Functions

 this ()
 Setup the registry.

Public Attributes

int[char[]] extensions
 This associative list contains a deconstructed form of the #getString (EXTENSIONS) command; each key is an extension, each value is false.

GLint textureUnitCount
 The number of texture units available.

Member Function Documentation

void CanvasGL.fullscreen (bit value)

Assign whether to use fullscreen mode.

When fullscreen, this canvas envelops the entire screen and the mode is set to the suggested width and height. Resetting to windowed mode returns it to normal.

Member Data Documentation

int[char[]] CanvasGL.extensions

This associative list contains a deconstructed form of the #getString (EXTENSIONS) command; each key is an extension, each value is false.

You can use it in the form "'SGIS_fog_function' in extensions"; DON'T use "extensions ['SGIS_fog_function']" (the values are cleared to false precisely to make that form invalid).

The documentation for this class was generated from the following file:
Generated on Thu Sep 4 13:12:51 2003 for dig by doxygen 1.3.2