
OpenGL Tutorial

Table of Contents
OpenGL Tutorial
  Introduction to OpenGL Tutorial
  What is OpenGL?
  Part 0: Getting OpenGL Set Up
    Lesson 0a: Getting OpenGL Set Up on Windows
    Lesson 0b: Getting OpenGL Set Up on Mac OS X
    Lesson 0c: Getting OpenGL Set Up on Linux
  Part 1: The Basics
    Lesson 1: Basic Shapes
    Lesson 2: Transformations and Timers
    Lesson 3: Color
    Lesson 4: Lighting
    Lesson 5: Textures
    Lesson 6: Putting It All Together
  Part 2: Topics in 3D Programming
    Lesson 7: Terrain
    Lesson 8: Drawing Text
    Lesson 9: Animation
    Lesson 10: Collision Detection
  Part 3: Special Effects
    Lesson 12: Alpha Blending

Introduction to OpenGL Tutorial


Here, you will find a tutorial for 3D programming in C using OpenGL. This tutorial is geared toward making games, but applies to all 3D programming, including simulation and modeling. It is designed to be as beginner-friendly as possible. It assumes that you have some knowledge of C, though it stays away from more advanced language features. I also assume that you are familiar with vectors and matrices. This tutorial is based on the one available at http://www.videotutorialsrock.com/index.php, adapted here to the C language.

What is OpenGL?
In this lesson, you will learn what OpenGL is and how it enables you to program in 3D.

What exactly is OpenGL? It's a way to draw stuff in 3D. It can also be used for 2D drawing. The graphics card is where the 3D computation happens. The purpose of OpenGL is to communicate with the graphics card about your 3D scene. So why not talk to the graphics card directly? Each graphics card is a little different. In a sense, they all speak different "languages". To talk to them all, you can either learn all of their languages, or find a "translator" that knows all of their languages and talk to the translator, so that you only have to know one language. OpenGL serves as a "translator" for graphics cards.

Part 0: Getting OpenGL Set Up


In this part, you will learn how to get OpenGL and GLUT set up on Windows, Mac OS X, or Linux, so that you can get started making 3D programs. You can use OpenGL on other operating systems, but this tutorial doesn't cover how to get it set up on those OSes.

Lesson 0a: Getting OpenGL Set Up on Windows


This lesson will explain how to get OpenGL and GLUT set up on Windows. We'll use the Visual C++ Express 2005 IDE to edit, compile, and run our programs. Visual C++ Express is a free IDE. To use it, you will have to register within 30 days, which is also free. You can use another IDE if you prefer, but getting it set up will be a little different.

Downloading and Installing

First, download and install the necessary software using the following instructions:

1. Download Visual C++ Express and the Microsoft Platform SDK from the Microsoft website. Note that when you download the SDK, it may say something about Windows Server. Don't worry about that; it'll install just fine on any modern version of Windows.
2. Install Visual C++ and the SDK.
3. Download the OpenGL installer from here and the GLUT binary from here.
4. Run the OpenGL installer.
5. Extract GLUT to the directory of your choice. You can do this by creating a new directory, locating and opening the ZIP file using Windows Explorer, and copying the files to the new directory using copy-paste. Alternatively, you can use a free program like WinZip to extract GLUT.
6. In the directory to which you extracted GLUT, make two folders, one called "include" and one called "lib". In the "include" folder, create another folder called "GL", and move glut.h to that folder. Move all of the other files that you extracted for GLUT into the "lib" folder.
7. Run Visual C++ Express. Go to Tools -> Options, then Projects and Solutions -> VC++ Directories. Note where it says "Show directories for". You'll want to change the directories for include files by adding "x\include", "y\include", and "z\Include", and to change the directories for library files by adding "x\lib", "y\lib", and "z\Lib", where "x" is the folder where you installed OpenGL, "y" is the folder where you extracted GLUT, and "z" is the folder where you installed the Microsoft Platform SDK.
8. Change your PATH environment variable as follows: go to the control panel, and go to System. Go to the "Advanced" tab and click on "Environment Variables". Find the "PATH" (or "Path") variable. Change it by adding ";x\lib;y\lib;z\Lib" (without the quotes) to the end of it, where again, "x", "y", and "z" are the folders where you installed OpenGL, GLUT, and the Microsoft Platform SDK. Make sure there are no spaces before or after the semicolons.
9. Reboot your computer, so that Windows will recognize the changes to the PATH environment variable.

Compiling and Running the Test Program (download here the zip file)

To make sure that everything was set up correctly, we're going to see if we can get a test program to work.

1. Download this test program and extract it somewhere on your computer.
2. Run Visual Studio C++. Go to File -> New -> Project From Existing Code.
3. Click next to indicate that you are making a Visual C++ project.
4. Set the project file location to the folder to which you extracted the test program. Enter a name for your project (such as "cube") and click next.
5. Change the project type from "Windows application project" to "Console application project", and click next.
6. Click next, then click finish to finish creating the project.
7. Go to Project -> Properties. Click on Configuration Properties. Click the "Configuration Manager" button in the upper-right corner. Change the "Active solution configuration" from "Debug" to "Release". Click close, then click OK.
8. In Project -> Properties, go to Configuration Properties -> General. Where it shows the output directory as "Release", backspace the word "Release", and click OK. This makes Visual C++ put the executable in the same directory as the source code, so when our program needs to open a file, it looks for it in that directory. In this case, the program will have to load an image file called "vtr.bmp".
9. Go to Build -> Build project_name to build your project.
10. There should be two warnings about ignoring /INCREMENTAL. You don't have to, but if you want, you can fix them as follows. In Project -> Properties, go to Configuration Properties -> Linker, and change "Enable Incremental Linking" from "Yes (/Incremental)" to "No (/Incremental:No)".
11. Run the program by going to Debug -> Start Without Debugging.

If all goes well, the test program should run. Note that you'll have to set up a project every time you want to work on a program from my site, so you'll have to repeat steps 1 - 11 above.

I'd like to point out a couple of things about the program. First of all, notice that the project has a file called "Makefile". It's not used on Windows; it's only needed for Linux, Mac OS X, and other UNIX-based operating systems. But Visual C++ will automatically ignore the file, so you don't have to worry about removing it from the project.

Lesson 0b: Getting OpenGL Set Up on Mac OS X


Getting OpenGL and GLUT Set Up

Setting up OpenGL and GLUT on Mac OS X is really easy. Just make sure you've installed the special developer programs, which should have come on your computer's recovery / install CD. That's it!

Compiling and Running the Test Program (download here the zip file)

Let's make sure that OpenGL works. Download the source code for this later lesson and extract it to any folder (e.g. using Finder). Then, from the command-line, use "cd" to change to the directory where you extracted it. Enter "make" to compile the program, and make sure there are no error messages. Then, enter "./cube" to run the program, and make sure that it runs. Press ESC to exit the program when you're done.

Compiling and Editing

To my knowledge, the standard way of editing and compiling a C program on Mac is to edit it with a text editor, such as TextEdit, or Emacs or Vim for advanced users, and to compile it using the "make" command as in the previous section. (Unfortunately, I don't know of any better text editors out there for Mac.) Look at the contents of "Makefile" in the source directory. It is an important file used by "make" to figure out how to compile our program. The line "PROG = cube" tells "make" that we want the executable file to be called "cube". The line "SRCS = main.c imageloader.c" tells "make" the files that we need it to compile. You'll have to edit these lines if you ever want to change the name of the executable or change the source files that "make" compiles.

Lesson 0c: Getting OpenGL Set Up on Linux


Downloading and Installing the OpenGL and GLUT Developer Files

To get OpenGL working on Linux, you need to make sure that you have gl.h and glut.h, among other files. They're usually located in /usr/include/GL/. If you have these files, then most likely you're already set, and you can skip to the next section in this lesson.

If you don't already have them, you'll have to download and install the OpenGL and GLUT development libraries. The best way to do this is to find the appropriate package files. It's best to avoid downloading and compiling the source code for the libraries yourself, as there are lots more things that can go wrong, and for various reasons it tends to be very annoying.

What package files you'll need depends on the distribution of Linux that you're using. If you're using Debian, you'll need a .deb file. To find out what package you need, you can go to the Debian packages website and search for packages containing gl.h and glut.h, located in /usr/include/GL/. A quick search now reveals that the freeglut3-dev and mesa-common-dev packages provide these files, but this may change, and you may need a different package for an NVIDIA graphics card. To install the packages in Debian, the easiest thing to do is to open a terminal and enter "sudo apt-get install package1 package2"; this will take care of downloading and installing the packages and getting the other packages on which the OpenGL and GLUT development packages depend.

Another common package file format is .rpm, used in the Red Hat distribution and others. To install the developer files using RPM packages, you'll want to search (e.g. using Google) for an RPM file that provides gl.h and glut.h, and install them, e.g. by double-clicking them in a file explorer. Different distributions have package formats other than .deb and .rpm; the process for them is similar. If you find a package for a different distribution than the one you're using, you can usually convert it to the format used in your distribution using the command "sudo alien package_name".

Compiling and Running the Test Program (download here the zip file)

Once you have gl.h and glut.h, make sure that OpenGL works. You can do this by downloading the source code for this later lesson and testing it out. Download the ZIP file and extract it to a folder (e.g. using a file explorer). Then, from the command-line, use "cd" to change to the directory where you extracted it. Enter "make" to compile the program, and make sure there are no error messages. Then, enter "./cube" to run the program, and make sure that it runs. Press ESC to exit the program when you're done.

If there was an error, most likely you don't have the developer packages installed properly. But if they are installed, you'll have to figure out how to fix the problem. Googling the error message that you get isn't a bad starting point. You can also find an online forum and ask someone there.

Part 1: The Basics


Lesson 1: Basic Shapes
Let's take a look at our first OpenGL program. Download the "basic shapes" program, and compile and run it (details on how to do that can be found in "Part 0: Getting OpenGL Set Up"). Take a look at it, and hit ESC when you're done. It should look like the following image:

Overview of How the Program Works How does the program work? The basic idea is that we tell OpenGL the 3D coordinates of all of the vertices of our shapes. OpenGL uses the standard x and y axes, with the positive x direction pointing toward the right and the positive y direction pointing upward. However, in 3D we need another dimension, the z dimension. The positive z direction points out of the screen.

How does OpenGL use these 3D coordinates? It simulates the way that our eyes work. Take a look at the following picture.

OpenGL converts all of the 3D points to pixel coordinates before it draws anything. To do this, it draws a line from each point in the scene to your eye and takes the intersection of the line and the screen rectangle, as in the above picture. So, when OpenGL wants to draw a triangle, it converts the three vertices into pixel coordinates and draws a "2D" triangle using those coordinates.
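This projection boils down to similar triangles: a point at (x, y, z) in front of the eye lands at (x·d/−z, y·d/−z) on a screen at distance d. Here is a tiny sketch of that idea — project is a made-up helper for illustration, not an OpenGL call:

```c
#include <assert.h>

/* Project a 3D point onto a screen plane at distance d in front of
 * the eye (eye at the origin, looking down the negative z-axis).
 * Hypothetical helper -- it just illustrates the similar-triangles
 * idea behind the picture above. */
typedef struct { float x, y; } Point2D;

Point2D project(float x, float y, float z, float d) {
    Point2D p;
    p.x = x * d / -z;  /* similar triangles: x' / d = x / -z */
    p.y = y * d / -z;
    return p;
}
```

No matter what d is, doubling it just scales every projected coordinate by the same factor, so each point keeps the same relative position on the screen — which is why OpenGL never needs to know how far away the "screen" actually is.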

The user's "eye" is always at the origin, looking in the negative z direction. Of course, OpenGL doesn't draw anything that is behind the "eye". (After all, it isn't the all-seeing eye of Sauron.) How far away is the screen rectangle from your eye? Actually, it doesn't matter. No matter how far away the screen rectangle is, a given 3D point will map to the same pixel coordinates. All that matters is the angle that your eye can see.

Going Through the Source Code

All of this stuff about pixel coordinates is great and all, but as programmers, we want to see some code. Take a look at main.c. The first thing you'll notice is the license indicating that the code, like all the code in this tutorial, is completely free. That's right, F-R-E-E. You can even use it in commercial projects. The second thing you'll notice is that it's heavily commented, so much so that it's a bit of an eyesore. That's because this is the first lesson. Other lessons will not be so heavily commented, but they'll still have comments. Let's go through the file and see if we can understand what it's doing.
#include <stdlib.h> //Needed for "exit" function

//Include OpenGL header files, so that we can use OpenGL
#ifdef __APPLE__
#include <OpenGL/OpenGL.h>
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif

First, we include our header files. Pretty standard stuff for C. If we're using a Mac, we want our program to include GLUT/glut.h and OpenGL/OpenGL.h; otherwise, we include GL/glut.h.
//Called when a key is pressed
void handleKeypress(unsigned char key, //The key that was pressed
                    int x, int y) {    //The current mouse coordinates
    switch (key) {
        case 27: //Escape key
            exit(0); //Exit the program
    }
}

This function handles any keys pressed by the user. For now, all that it does is quit the program when the user presses ESC, by calling exit. The function is passed the x and y coordinates of the mouse, but we don't need them.
//Initializes 3D rendering
void initRendering() {
    //Makes 3D drawing work when something is in front of something else
    glEnable(GL_DEPTH_TEST);
}

The initRendering function initializes our rendering parameters. For now, it doesn't do much. We'll pretty much always want to call glEnable(GL_DEPTH_TEST) when we initialize rendering. The call makes sure that an object shows up behind any object in front of it, even if the nearer object was drawn first, which is what we want to happen. Note that glEnable, like every OpenGL function, begins with "gl".

//Called when the window is resized
void handleResize(int w, int h) {
    //Tell OpenGL how to convert from coordinates to pixel values
    glViewport(0, 0, w, h);

    glMatrixMode(GL_PROJECTION); //Switch to setting the camera perspective

    //Set the camera perspective
    glLoadIdentity(); //Reset the camera
    gluPerspective(45.0,                  //The camera angle
                   (double)w / (double)h, //The width-to-height ratio
                   1.0,                   //The near z clipping coordinate
                   200.0);                //The far z clipping coordinate
}
The handleResize function is called whenever the window is resized. w and h are the new width and height of the window. The content of handleResize will not change much in our other projects, so you don't have to worry about it too much. There are a couple of things to notice. When we pass 45.0 to gluPerspective, we're telling OpenGL the angle that the user's eye can see. The 1.0 indicates not to draw anything with a z coordinate of greater than -1. This is so that when something is right next to our eye, it doesn't fill up the whole screen. The 200.0 tells OpenGL not to draw anything with a z coordinate less than -200. We don't care very much about stuff that's really far away. So, why does gluPerspective begin with "glu" instead of "gl"? That's because technically, it's a GLU (GL Utility) function. In addition to "gl" and "glu", some functions we call will begin with "glut" (GL Utility Toolkit). We won't really worry about the difference among OpenGL, GLU, and GLUT.
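The near and far values aren't magic. gluPerspective builds a matrix that maps z = -near to a depth of -1 and z = -far to a depth of +1 after the "perspective divide", and anything outside that range gets clipped. Here is a sketch of just the depth calculation — a hand-derivation from the standard projection matrix, not OpenGL code:

```c
#include <assert.h>
#include <math.h>

/* Depth mapping set up by gluPerspective(fovy, aspect, near, far):
 * after the perspective divide, z = -near maps to -1 and z = -far
 * maps to +1; depths outside [-1, 1] are clipped away. */
double depth_after_divide(double z, double near_z, double far_z) {
    double z_clip = ((far_z + near_z) / (near_z - far_z)) * z
                    + (2.0 * far_z * near_z) / (near_z - far_z);
    double w_clip = -z;     /* the projection matrix copies -z into w */
    return z_clip / w_clip; /* the perspective divide */
}
```

With near = 1.0 and far = 200.0, as in handleResize, a point at z = -1 comes out at depth -1 and a point at z = -200 at depth +1, while a point at z = -300 falls outside [-1, 1] and is never drawn.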
//Draws the 3D scene
void drawScene() {
    //Clear information from last draw
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

The drawScene function is where the 3D drawing actually occurs. First, we call glClear to clear information from the last time we drew. In almost every OpenGL program, you'll want to do this.
    glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
    glLoadIdentity(); //Reset the drawing perspective

For now, we'll ignore this. It'll make sense after the next lesson, which covers transformations.
    glBegin(GL_QUADS); //Begin quadrilateral coordinates

    //Trapezoid
    glVertex3f(-0.7f, -1.5f, -5.0f);
    glVertex3f(0.7f, -1.5f, -5.0f);
    glVertex3f(0.4f, -0.5f, -5.0f);
    glVertex3f(-0.4f, -0.5f, -5.0f);

    glEnd(); //End quadrilateral coordinates

Here, we begin the substance of our program. This part draws the trapezoid. To draw a trapezoid, we call glBegin(GL_QUADS) to tell OpenGL that we want to start drawing quadrilaterals. Then, we specify the four 3D coordinates of the vertices of the trapezoid, in order, using calls to glVertex3f. When we call glVertex3f, we are specifying three (that's where the "3" comes from) float (that's where the "f" comes from) coordinates. Then, since we're done drawing quadrilaterals, we call glEnd(). Note that every call to glBegin must have a matching call to glEnd. All of the "f"'s after the vertex coordinates force the compiler to treat the numbers as floats.
    glBegin(GL_TRIANGLES); //Begin triangle coordinates

    //Pentagon
    glVertex3f(0.5f, 0.5f, -5.0f);
    glVertex3f(1.5f, 0.5f, -5.0f);
    glVertex3f(0.5f, 1.0f, -5.0f);

    glVertex3f(0.5f, 1.0f, -5.0f);
    glVertex3f(1.5f, 0.5f, -5.0f);
    glVertex3f(1.5f, 1.0f, -5.0f);

    glVertex3f(0.5f, 1.0f, -5.0f);
    glVertex3f(1.5f, 1.0f, -5.0f);
    glVertex3f(1.0f, 1.5f, -5.0f);

Now, we draw the pentagon. To draw it, we split it up into three triangles, which is pretty standard for OpenGL. We start by calling glBegin(GL_TRIANGLES) to tell OpenGL that we want to draw triangles. Then, we tell it the coordinates of the vertices of the triangles. OpenGL automatically puts the coordinates together in groups of three. Each group of three coordinates represents one triangle.
    //Triangle
    glVertex3f(-0.5f, 0.5f, -5.0f);
    glVertex3f(-1.0f, 1.5f, -5.0f);
    glVertex3f(-1.5f, 0.5f, -5.0f);

Finally, we draw the triangle. We haven't called glEnd() to tell OpenGL that we're done drawing triangles yet, so it knows that we're still giving it triangle coordinates.
glEnd(); //End triangle coordinates

Now, we're done drawing triangles, so we call glEnd(). Note that we could have drawn the above four triangles using four calls to glBegin(GL_TRIANGLES) and four accompanying calls to glEnd(). However, this makes the program slower, and you shouldn't do it. There are other things we can pass to glBegin in addition to GL_TRIANGLES and GL_QUADS, but triangles and quadrilaterals are the most common things to draw.
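Splitting the pentagon into triangles is an instance of a general rule: any convex polygon with n vertices can be cut into n - 2 triangles. One standard way is a "fan" from a single vertex, sketched below — fan_triangulate is a hypothetical helper (the lesson's pentagon happens to use a slightly different split, but the triangle count is the same):

```c
#include <assert.h>

/* Cut a convex polygon with n vertices into a fan of n - 2 triangles,
 * all sharing vertex 0. Writes 3 * (n - 2) vertex indices into out
 * and returns the number of triangles produced. */
int fan_triangulate(int n, int *out) {
    int count = 0;
    for (int i = 1; i + 1 < n; i++) {
        out[count * 3 + 0] = 0;  /* every triangle shares vertex 0 */
        out[count * 3 + 1] = i;
        out[count * 3 + 2] = i + 1;
        count++;
    }
    return count;
}
```

For a pentagon (n = 5), this produces the three triangles (0, 1, 2), (0, 2, 3), and (0, 3, 4), whose vertices you could then feed to glVertex3f calls inside a single glBegin(GL_TRIANGLES) block.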
    glutSwapBuffers(); //Send the 3D scene to the screen
}

This line makes OpenGL actually move the scene to the window. We'll call it whenever we're done drawing a scene.
int main(int argc, char** argv) {
    //Initialize GLUT
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(400, 400); //Set the window size

    //Create the window
    glutCreateWindow("Basic Shapes - videotutorialsrock.com");
    initRendering(); //Initialize rendering

This is the program's main function. We start by initializing GLUT. Again, something similar will appear in all of our programs, so you don't have to worry too much about it. In the call to glutInitWindowSize, we set the window to be 400x400. When we call glutCreateWindow, we tell it what title we want for the window. Then, we call initRendering, the function that we wrote to initialize OpenGL rendering.
    //Set handler functions for drawing, keypresses, and window resizes
    glutDisplayFunc(drawScene);
    glutKeyboardFunc(handleKeypress);
    glutReshapeFunc(handleResize);

Now, we point GLUT to the functions that we wrote to handle keypresses and drawing and resizing the window. One important thing to note: we're not allowed to draw anything except inside the drawScene function that we explicitly give to GLUT, or inside functions that drawScene calls (or functions that they call, etc.).
    glutMainLoop(); //Start the main loop. glutMainLoop doesn't return.
    return 0; //This line is never reached
}

Next, we call glutMainLoop, which tells GLUT to do its thing. That is, we tell GLUT to capture key and mouse input, to draw the scene when it has to by calling our drawScene function, and to do some other stuff. glutMainLoop, like a defective boomerang, never returns. GLUT just takes care of the rest of our program's execution. After the call, we have return 0 so that the compiler doesn't complain about the main function not returning anything, but the program will never get to that line. And that's how our first OpenGL program works. You may want to try the exercises to get more familiar with what you just learned.

Exercises

- Make the pentagon farther away from the camera (or eye) than the other shapes.
- What's your favorite quadrilateral? (Romance tip: a great question to ask on a first date.) Replace the triangle with that shape.
- Draw the trapezoid using glBegin(GL_TRIANGLES) instead of glBegin(GL_QUADS).

Lesson 2: Transformations and Timers


Transformations

Our last program was kind of lame. Aren't we supposed to be doing 3D programming? It looked pretty 2D. Let's make things a bit more interesting: we'll make the shapes rotate in 3D. To do this, we'll have to understand a little about transformations in OpenGL.

To think of them, imagine a bird flying around the scene. It starts out at the origin, facing the negative z direction. The bird can move, rotate, and even grow or shrink. Whenever we specify points to OpenGL using glVertex, OpenGL interprets them relative to our bird. So, if we shrink the bird by a factor of 2 and then move it 2 units to the right, from its perspective, then the point (0, 4, 0) relative to the bird is actually at (1, 2, 0). If instead, we rotate the bird 90 degrees about the x-axis and move it 2 units up, the point (0, 0, -1) relative to the bird is (0, -1, -2) in world coordinates. This is shown in the picture below, with my bird that I made out of silly putty. Note that to see it better, we're viewing the scene from the side.
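You can check the first bird example with a few lines of arithmetic. Because transformations apply in the order they are issued, "shrink by a factor of 2, then move 2 units right" means the translation itself also gets scaled. A toy sketch with made-up names, limited to uniform scaling and translation:

```c
#include <assert.h>

typedef struct { float x, y, z; } Vec3;

/* Apply "scale uniformly by s, then translate by t in the bird's own
 * (already scaled) frame" to a bird-relative point p. Composing the
 * two gives world = s * (p + t), mirroring a glScalef call followed
 * by a glTranslatef call. */
Vec3 scale_then_translate(Vec3 p, float s, Vec3 t) {
    Vec3 w;
    w.x = s * (p.x + t.x);
    w.y = s * (p.y + t.y);
    w.z = s * (p.z + t.z);
    return w;
}
```

Shrinking by a factor of 2 (s = 0.5) and moving 2 units right sends the bird-relative point (0, 4, 0) to (1, 2, 0) in world coordinates, exactly as claimed above.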

At this point, you may be thinking, "This is stupid. Why don't we just specify all of the points directly?" Just hang on. This will become clear later in the course of this lesson. We're going to start with the code from the last lesson, with some of the comments removed. First of all, instead of using -5 for the z coordinates of all of the points, let's just translate our bird 5 units forward, and then use 0 for their z coordinates. We translate by using a call to glTranslatef, with the amount that we want to translate in the x, y, and z directions.
    glLoadIdentity(); //Reset the drawing perspective
    glTranslatef(0.0f, 0.0f, -5.0f); //Move forward 5 units

    glBegin(GL_QUADS);

    //Trapezoid
    glVertex3f(-0.7f, -1.5f, 0.0f);
    glVertex3f(0.7f, -1.5f, 0.0f);
    glVertex3f(0.4f, -0.5f, 0.0f);
    glVertex3f(-0.4f, -0.5f, 0.0f);

    glEnd();

    glBegin(GL_TRIANGLES);

    //Pentagon
    glVertex3f(0.5f, 0.5f, 0.0f);
    glVertex3f(1.5f, 0.5f, 0.0f);
    glVertex3f(0.5f, 1.0f, 0.0f);

    glVertex3f(0.5f, 1.0f, 0.0f);
    glVertex3f(1.5f, 0.5f, 0.0f);
    glVertex3f(1.5f, 1.0f, 0.0f);

    glVertex3f(0.5f, 1.0f, 0.0f);
    glVertex3f(1.5f, 1.0f, 0.0f);
    glVertex3f(1.0f, 1.5f, 0.0f);

    //Triangle
    glVertex3f(-0.5f, 0.5f, 0.0f);
    glVertex3f(-1.0f, 1.5f, 0.0f);
    glVertex3f(-1.5f, 0.5f, 0.0f);

    glEnd();

If we compile and run the program with these changes, it works the same, which is what we want. I'd glossed over the meaning of the call to glLoadIdentity() in the last lesson. What it does is reset our bird, so that it is at the origin and is facing in the negative z direction. Now let's use some more translating, so that whenever we specify points for a shape, they are relative to the shape's center.
    glLoadIdentity(); //Reset the drawing perspective
    glTranslatef(0.0f, 0.0f, -5.0f); //Move forward 5 units

    glPushMatrix(); //Save the transformations performed thus far
    glTranslatef(0.0f, -1.0f, 0.0f); //Move to the center of the trapezoid

    glBegin(GL_QUADS);

    //Trapezoid
    glVertex3f(-0.7f, -0.5f, 0.0f);
    glVertex3f(0.7f, -0.5f, 0.0f);
    glVertex3f(0.4f, 0.5f, 0.0f);
    glVertex3f(-0.4f, 0.5f, 0.0f);

    glEnd();

    glPopMatrix(); //Undo the move to the center of the trapezoid

    glPushMatrix(); //Save the current state of transformations
    glTranslatef(1.0f, 1.0f, 0.0f); //Move to the center of the pentagon

    glBegin(GL_TRIANGLES);

    //Pentagon
    glVertex3f(-0.5f, -0.5f, 0.0f);
    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(-0.5f, 0.0f, 0.0f);

    glVertex3f(-0.5f, 0.0f, 0.0f);
    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(0.5f, 0.0f, 0.0f);

    glVertex3f(-0.5f, 0.0f, 0.0f);
    glVertex3f(0.5f, 0.0f, 0.0f);
    glVertex3f(0.0f, 0.5f, 0.0f);

    glEnd();

    glPopMatrix(); //Undo the move to the center of the pentagon

    glPushMatrix(); //Save the current state of transformations
    glTranslatef(-1.0f, 1.0f, 0.0f); //Move to the center of the triangle

    glBegin(GL_TRIANGLES);

    //Triangle
    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(0.0f, 0.5f, 0.0f);
    glVertex3f(-0.5f, -0.5f, 0.0f);

    glEnd();

    glPopMatrix(); //Undo the move to the center of the triangle

Again, if we compile and run these changes, the program works the same. There are two new and important functions used in this code: glPushMatrix() and glPopMatrix(). We use them to save and restore the state of our bird. glPushMatrix saves its state, and glPopMatrix restores it. Note that, like glBegin and glEnd, each call to glPushMatrix must have a corresponding call to glPopMatrix. We have to save the state of our bird using glPushMatrix in order to undo the move to the center of the shapes.

We can save more than one bird state at a time. In fact, we have a stack of saved states. Every time we call glPushMatrix, we add a state to the top of the stack, and every time we call glPopMatrix, we restore and remove the state at the top of the stack. The stack can store at least 32 different transformation states. glPushMatrix and glPopMatrix are so named because OpenGL uses matrices to represent the state of our bird. For now, you don't have to worry about how exactly the matrices work.

And now, we'll actually change what our program does. Let's make all of the shapes rotated by 30 degrees and shrink the pentagon to 70% of its original size.
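The push/pop behavior really is just a stack. Here's a toy sketch of the idea, with the saved "state" reduced to a plain translation offset instead of a full 4x4 matrix — all the names here are made up for illustration:

```c
#include <assert.h>

/* Sketch of the glPushMatrix/glPopMatrix idea, with the transformation
 * state reduced to a translation offset. OpenGL guarantees the real
 * modelview stack holds at least 32 entries. */
#define STACK_DEPTH 32

typedef struct { float x, y, z; } Offset;

Offset saved[STACK_DEPTH];
Offset current = {0.0f, 0.0f, 0.0f};
int top = 0;

void push_state(void) {              /* like glPushMatrix */
    assert(top < STACK_DEPTH);
    saved[top++] = current;
}

void pop_state(void) {               /* like glPopMatrix */
    assert(top > 0);
    current = saved[--top];
}

void translate(float x, float y, float z) {  /* like glTranslatef */
    current.x += x; current.y += y; current.z += z;
}
```

In drawScene, each shape does push_state, translate-to-center, draw, pop_state; after the pop, the bird is right back where it was before the shape was drawn, ready for the next one.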
float _angle = 30.0f;

//Draws the 3D scene
void drawScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
    glLoadIdentity(); //Reset the drawing perspective
    glTranslatef(0.0f, 0.0f, -5.0f); //Move forward 5 units

    glPushMatrix(); //Save the transformations performed thus far
    glTranslatef(0.0f, -1.0f, 0.0f); //Move to the center of the trapezoid
    glRotatef(_angle, 0.0f, 0.0f, 1.0f); //Rotate about the z-axis

    glBegin(GL_QUADS);

    //Trapezoid
    glVertex3f(-0.7f, -0.5f, 0.0f);
    glVertex3f(0.7f, -0.5f, 0.0f);
    glVertex3f(0.4f, 0.5f, 0.0f);
    glVertex3f(-0.4f, 0.5f, 0.0f);

    glEnd();

    glPopMatrix(); //Undo the move to the center of the trapezoid

    glPushMatrix(); //Save the current state of transformations
    glTranslatef(1.0f, 1.0f, 0.0f); //Move to the center of the pentagon
    glRotatef(_angle, 0.0f, 1.0f, 0.0f); //Rotate about the y-axis
    glScalef(0.7f, 0.7f, 0.7f); //Scale by 0.7 in the x, y, and z directions

    glBegin(GL_TRIANGLES);

    //Pentagon
    glVertex3f(-0.5f, -0.5f, 0.0f);
    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(-0.5f, 0.0f, 0.0f);

    glVertex3f(-0.5f, 0.0f, 0.0f);
    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(0.5f, 0.0f, 0.0f);

    glVertex3f(-0.5f, 0.0f, 0.0f);
    glVertex3f(0.5f, 0.0f, 0.0f);
    glVertex3f(0.0f, 0.5f, 0.0f);

    glEnd();

    glPopMatrix(); //Undo the move to the center of the pentagon

    glPushMatrix(); //Save the current state of transformations
    glTranslatef(-1.0f, 1.0f, 0.0f); //Move to the center of the triangle
    glRotatef(_angle, 1.0f, 2.0f, 3.0f); //Rotate about the vector (1, 2, 3)

    glBegin(GL_TRIANGLES);

    //Triangle
    glVertex3f(0.5f, -0.5f, 0.0f);
    glVertex3f(0.0f, 0.5f, 0.0f);
    glVertex3f(-0.5f, -0.5f, 0.0f);

    glEnd();

    glPopMatrix(); //Undo the move to the center of the triangle

Now, our program looks like this:

We introduced a new variable, _angle, which stores the number of degrees by which we want to rotate our shapes. We also use two new functions. We call glRotatef, which rotates our bird. Our call to glRotatef(_angle, 0.0f, 0.0f, 1.0f) rotates our bird by _angle degrees about the z-axis, while our call to glRotatef(_angle, 1.0f, 2.0f, 3.0f) rotates our bird by _angle degrees about the vector (1, 2, 3). We also call glScalef(0.7f, 0.7f, 0.7f), which shrinks our bird to 70% of its original size in the x, y, and z directions. If we were to call glScalef(2.0f, 1.0f, 1.0f) instead, we would double its size in the horizontal direction, according to its perspective. It is important to note that glTranslatef, glRotatef, and glScalef may not be called in a glBegin-glEnd block. Now, let's change the camera angle so that we look 10 degrees to the left.
float _cameraAngle = 10.0f;

//Draws the 3D scene
void drawScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    
    glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
    glLoadIdentity(); //Reset the drawing perspective
    glRotatef(-_cameraAngle, 0.0f, 1.0f, 0.0f); //Rotate the camera
    glTranslatef(0.0f, 0.0f, -5.0f); //Move forward 5 units

Our program looks like this:

Observe that we use a special trick to change the camera angle: we just rotate the entire scene by 10 degrees in the opposite direction. This is a useful technique that you'll use a lot in 3D programming.

Before we move on to timers, I'd like to explain glMatrixMode. If we call glMatrixMode(GL_MODELVIEW), we switch to setting transformations for the points in the scene. If we call glMatrixMode(GL_PROJECTION), like we did in handleResize, we switch to setting a special transformation that is applied to our points in addition to the normal transformations. Take a look at handleResize. We switched to the projection matrix mode, called glLoadIdentity() to reset all of its transformations, and called gluPerspective. gluPerspective performs a weird transformation that gives our points "perspective". Don't worry about how exactly it works. You just have to know that we use GL_PROJECTION to set up our perspective and GL_MODELVIEW for everything else. GL_PROJECTION is sometimes described as the transformation for the camera, but this isn't exactly accurate, because light sources aren't affected by the transformations in "projection" mode, so it's a bad idea to use it for setting the camera.

Now that we've changed the camera angle, it's harder to see everything, so let's just change _cameraAngle back to 0.

Timers

And now, let's add some motion using GLUT timers. The basic idea behind timers is that we want some piece of code to execute every so often. In this case, let's rotate the shapes by 2 degrees every 25 milliseconds. Here's how we do it.
void update(int value) {
    _angle += 2.0f;
    if (_angle > 360) {
        _angle -= 360;
    }
    
    glutPostRedisplay(); //Tell GLUT that the scene has changed
    
    //Tell GLUT to call update again in 25 milliseconds
    glutTimerFunc(25, update, 0);
}

Here's our update function. First, we increase the angle by 2. If it gets above 360 degrees, we subtract 360, which doesn't change the angle that the variable indicates. We don't actually have to do that, but it's better to keep angles small, because of issues related to float precision; I won't go into detail about that here. Then, we call glutPostRedisplay(), which tells GLUT that the scene has changed and makes sure that GLUT redraws it. Finally, we call glutTimerFunc(25, update, 0), which tells GLUT to call update again in 25 milliseconds. The value parameter is something that GLUT passes to our update function. It is the same as the last parameter we passed to glutTimerFunc for that function, so it will always be 0. We don't need to use the parameter, so we just ignore it.
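The wrap-around step in update is easy to check on its own. Here's a minimal sketch, with a hypothetical advanceAngle helper standing in for the first few lines of update:

```cpp
#include <cassert>

// Advance a rotation angle by one timer tick and wrap it back below 360,
// mirroring what update() does. Keeping the value small preserves float
// precision over a long run of the program.
float advanceAngle(float angle, float step) {
    angle += step;
    if (angle > 360.0f) {
        angle -= 360.0f;
    }
    return angle;
}
```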
glutTimerFunc(25, update, 0); //Add a timer

We add another call to glutTimerFunc to our main function, so that GLUT calls update for the first time 25 milliseconds after the program starts. That's it. Give the program a go. Download the source code, compile the program, and run it. Marvel at our accomplishment; we now have rotating shapes.

Exercises

Using one additional function call, make all of the shapes half their size, without changing any of the calls to glVertex3f.

Using one additional call to glTranslatef, move the triangle and the pentagon half a unit to the right, without changing any of the calls to glVertex3f.

Make the shapes rotate at different speeds without adding any extra timers.

Using timers, but without adding another timer, make the camera rotate continuously about the vector (0.1, -0.1, 1). (Don't stare at it for too long; you might get hypnotized.)
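Before moving on, the glPushMatrix/glPopMatrix bookkeeping from this lesson is worth internalizing. Here's a toy model of it — not OpenGL code, just a sketch with hypothetical names that tracks only a translation instead of a full 4×4 matrix:

```cpp
#include <cassert>
#include <stack>

// A toy model of OpenGL's modelview matrix stack that tracks only a
// translation. push() saves the current state and pop() restores it,
// just like glPushMatrix and glPopMatrix.
struct Transform {
    float x, y, z;
};

struct MatrixStack {
    Transform current{0.0f, 0.0f, 0.0f};
    std::stack<Transform> saved;

    void translate(float dx, float dy, float dz) { // like glTranslatef
        current.x += dx; current.y += dy; current.z += dz;
    }
    void push() { saved.push(current); }               // like glPushMatrix
    void pop() { current = saved.top(); saved.pop(); } // like glPopMatrix
};
```

Each shape in drawScene is bracketed by push()/pop(), so the move to one shape's center never affects the next shape.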

Lesson 3: Color
Now let's add a little color to our program. We'll start with the code from the previous lesson, with some of the comments removed. First, we add glEnable(GL_COLOR_MATERIAL) to the end of initRendering, in order to enable colors. Let's make a couple of changes to drawScene.
glColor3f(0.5f, 0.0f, 0.8f);
glBegin(GL_QUADS);

//Trapezoid
glVertex3f(-0.7f, -0.5f, 0.0f);
glVertex3f(0.7f, -0.5f, 0.0f);
glVertex3f(0.4f, 0.5f, 0.0f);
glVertex3f(-0.4f, 0.5f, 0.0f);

glEnd();

glPopMatrix();

glPushMatrix();
glTranslatef(1.0f, 1.0f, 0.0f);
glRotatef(_angle, 0.0f, 1.0f, 0.0f);
glScalef(0.7f, 0.7f, 0.7f);

glBegin(GL_TRIANGLES);
glColor3f(0.0f, 0.75f, 0.0f);

//Pentagon
glVertex3f(-0.5f, -0.5f, 0.0f);
glVertex3f(0.5f, -0.5f, 0.0f);
glVertex3f(-0.5f, 0.0f, 0.0f);

glVertex3f(-0.5f, 0.0f, 0.0f);
glVertex3f(0.5f, -0.5f, 0.0f);
glVertex3f(0.5f, 0.0f, 0.0f);

glVertex3f(-0.5f, 0.0f, 0.0f);
glVertex3f(0.5f, 0.0f, 0.0f);
glVertex3f(0.0f, 0.5f, 0.0f);

glEnd();

glPopMatrix();

glPushMatrix();
glTranslatef(-1.0f, 1.0f, 0.0f);
glRotatef(_angle, 1.0f, 2.0f, 3.0f);
glColor3f(0.0f, 0.65f, 0.65f);

glBegin(GL_TRIANGLES);

//Triangle
glVertex3f(0.5f, -0.5f, 0.0f);
glVertex3f(0.0f, 0.5f, 0.0f);
glVertex3f(-0.5f, -0.5f, 0.0f);

glEnd();

After we make these changes, our program looks like this.

The changes are simple enough. We just added two calls to glColor3f. Whenever we call glColor3f, we change the current color to the indicated RGB color, and everything we draw afterwards is drawn using that color. RGB is a very common way to represent colors on computers. Using the RGB system, we specify each color as a combination of red, green, and blue light components, where each component ranges from 0 to 1. If you're not familiar with RGB, I recommend that you look it up online and become familiar with it. Note that unlike with transformation functions, we can call glColor3f inside glBegin-glEnd blocks; this is what we do in the second call to glColor3f. Now, let's do a little color blending. We'll make the following changes to the code:
glBegin(GL_QUADS);

//Trapezoid
glColor3f(0.5f, 0.0f, 0.8f);
glVertex3f(-0.7f, -0.5f, 0.0f);
glColor3f(0.0f, 0.9f, 0.0f);
glVertex3f(0.7f, -0.5f, 0.0f);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex3f(0.4f, 0.5f, 0.0f);
glColor3f(0.0f, 0.65f, 0.65f);
glVertex3f(-0.4f, 0.5f, 0.0f);

glEnd();

glPopMatrix();

glPushMatrix();
glTranslatef(1.0f, 1.0f, 0.0f);
glRotatef(_angle, 0.0f, 1.0f, 0.0f);
glScalef(0.7f, 0.7f, 0.7f);

glBegin(GL_TRIANGLES);
glColor3f(0.0f, 0.75f, 0.0f);

//Pentagon
glVertex3f(-0.5f, -0.5f, 0.0f);
glVertex3f(0.5f, -0.5f, 0.0f);
glVertex3f(-0.5f, 0.0f, 0.0f);

glVertex3f(-0.5f, 0.0f, 0.0f);
glVertex3f(0.5f, -0.5f, 0.0f);
glVertex3f(0.5f, 0.0f, 0.0f);

glVertex3f(-0.5f, 0.0f, 0.0f);
glVertex3f(0.5f, 0.0f, 0.0f);
glVertex3f(0.0f, 0.5f, 0.0f);

glEnd();

glPopMatrix();

glPushMatrix();
glTranslatef(-1.0f, 1.0f, 0.0f);
glRotatef(_angle, 1.0f, 2.0f, 3.0f);

glBegin(GL_TRIANGLES);

//Triangle
glColor3f(1.0f, 0.7f, 0.0f);
glVertex3f(0.5f, -0.5f, 0.0f);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex3f(0.0f, 0.5f, 0.0f);
glColor3f(0.0f, 0.0f, 1.0f);
glVertex3f(-0.5f, -0.5f, 0.0f);

glEnd();

Here's what our program looks like now:


We can use a different color for each vertex, and OpenGL will automatically blend smoothly between the colors of the different vertices. Just one more thing. Let's change the background color from black to sky blue. To do this, we just add a call to glClearColor(0.7f, 0.9f, 1.0f, 1.0f) to the end of initRendering. The first three parameters are the RGB color of the background. We just put 1 for the last value; you don't have to worry about what it means. Now we have the following:

That's it. Give the program a go. Download the source code, compile the program, and run it.

Exercises

Change the trapezoid to be your eye color. Don't change the colors of the other shapes.

Using timers, make the pentagon switch instantly from red to green, then to blue, then back to red, and so on, so that it changes color every second.
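The smooth blending between vertex colors that we saw in this lesson is just linear interpolation done by the hardware. Here's a rough sketch of what happens along one edge — plain C++, not OpenGL, with a hypothetical lerpColor helper:

```cpp
#include <cassert>

// Linearly interpolate between two RGB vertex colors. OpenGL does this
// (in a perspective-correct form) for every pixel between two vertices
// that were given different colors.
struct Color {
    float r, g, b;
};

Color lerpColor(const Color &a, const Color &b, float t) {
    return Color{a.r + (b.r - a.r) * t,
                 a.g + (b.g - a.g) * t,
                 a.b + (b.b - a.b) * t};
}
```

Halfway between a red vertex and a blue vertex, the pixel color is half red, half blue.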

Lesson 4: Lighting
One way we can make our scenes look cooler is by adding light to them. In this lesson, we're going to scrap the scene from our previous lessons and make a new one: a box-shaped object with the top and bottom removed. Look at the source code. The first new thing is the call to glEnable(GL_LIGHTING) in initRendering, which enables lighting. Note that we can call glDisable(GL_LIGHTING) if we ever want to turn it back off. After that, we call glEnable(GL_LIGHT0) and glEnable(GL_LIGHT1) to enable two light sources, numbered 0 and 1. (You can disable the individual light sources by calling glDisable(GL_LIGHT0) and glDisable(GL_LIGHT1).) We have more than two lights at our disposal if we need them, using GL_LIGHT2, GL_LIGHT3, etc.; there are guaranteed to be at least eight possible lights. Then, we call glEnable(GL_NORMALIZE). We'll get to what that does later in this lesson. Now, go to the drawScene function.
//Add ambient light
GLfloat ambientColor[] = {0.2f, 0.2f, 0.2f, 1.0f}; //Color (0.2, 0.2, 0.2)
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientColor);

First, we add some ambient light, which shines the same amount on every face in our scene. Ambient light is sort of like light that's shining everywhere. In the real world, there's no such thing, but in computer graphics, it's really hard to simulate light sources so well that no surface is completely unlit, so we use ambient lighting to simplify our life. To add ambient light, we call glLightModelfv with GL_LIGHT_MODEL_AMBIENT as the first argument and an array of four GLfloats for the second argument. The compiler will automatically convert floats to GLfloats, as above. The first three floats represent the RGB intensity of the light. We want to add white ambient light that isn't very intense, so we use red, green, and blue components of intensity 0.2. Note that the values don't exactly represent a color; they represent an intensity of light. So you could have (2, 2, 2) as the ambient light's intensity, even though this isn't a color. An ambient light intensity of (1, 1, 1) without any other light sources would look the same as in the last lesson, when we didn't have any lighting. The fourth float we just put as 1.
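As a rough model of what the pipeline computes from these numbers, the ambient contribution for one color channel is the surface's color times the ambient intensity, clamped to the displayable range. This is a simplified sketch with a hypothetical ambientChannel helper (the real lighting equation adds diffuse and specular terms on top):

```cpp
#include <algorithm>
#include <cassert>

// Ambient contribution for a single color channel: the material's
// reflectance times the ambient light intensity, clamped to [0, 1] the
// way final colors are clamped. Intensities above 1 are legal inputs,
// which is why (2, 2, 2) is a valid ambient setting even though it
// isn't a color.
float ambientChannel(float material, float intensity) {
    return std::min(1.0f, material * intensity);
}
```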
//Add positioned light
GLfloat lightColor0[] = {0.5f, 0.5f, 0.5f, 1.0f}; //Color (0.5, 0.5, 0.5)
GLfloat lightPos0[] = {4.0f, 0.0f, 8.0f, 1.0f}; //Positioned at (4, 0, 8)
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor0);
glLightfv(GL_LIGHT0, GL_POSITION, lightPos0);

Here, we've added a light source. We call glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor0) to set the color / intensity of the light. We want it to be somewhat intense, so we make the intensity (0.5, 0.5, 0.5). Again, the fourth element in our array is 1. We want to position it at (4, 0, 8) relative to the current transformation, so we call glLightfv(GL_LIGHT0, GL_POSITION, lightPos0) with the array {4, 0, 8, 1}. The first three elements of the array are the position, and the last element is just 1 again.
//Add directed light
GLfloat lightColor1[] = {0.5f, 0.2f, 0.2f, 1.0f}; //Color (0.5, 0.2, 0.2)
//Coming from the direction (-1, 0.5, 0.5)
GLfloat lightPos1[] = {-1.0f, 0.5f, 0.5f, 0.0f};
glLightfv(GL_LIGHT1, GL_DIFFUSE, lightColor1);
glLightfv(GL_LIGHT1, GL_POSITION, lightPos1);

Now, we set up our second light source. We make it red, with an intensity of (0.5, 0.2, 0.2). Instead of giving it a fixed position, we want to make it directional, so that it shines the same amount across our whole scene in a fixed direction. To do that, we use 0 as the last element in lightPos1. When we do that, the first three elements no longer represent the light's position; they represent the direction from which the light is shining, relative to the current transformation state. Note that glLightfv cannot be called inside a glBegin-glEnd block. A good rule of thumb is that if something doesn't have to be allowed in a glBegin-glEnd block, it isn't allowed. Here's the next part of drawScene, with all of the commented-out lines removed.
glRotatef(_angle, 0.0f, 1.0f, 0.0f);
glColor3f(1.0f, 1.0f, 0.0f);
glBegin(GL_QUADS);

//Front
glNormal3f(0.0f, 0.0f, 1.0f);
glVertex3f(-1.5f, -1.0f, 1.5f);
glVertex3f(1.5f, -1.0f, 1.5f);
glVertex3f(1.5f, 1.0f, 1.5f);
glVertex3f(-1.5f, 1.0f, 1.5f);

//Right
glNormal3f(1.0f, 0.0f, 0.0f);
glVertex3f(1.5f, -1.0f, -1.5f);
glVertex3f(1.5f, 1.0f, -1.5f);
glVertex3f(1.5f, 1.0f, 1.5f);
glVertex3f(1.5f, -1.0f, 1.5f);

//Back
glNormal3f(0.0f, 0.0f, -1.0f);
glVertex3f(-1.5f, -1.0f, -1.5f);
glVertex3f(-1.5f, 1.0f, -1.5f);
glVertex3f(1.5f, 1.0f, -1.5f);
glVertex3f(1.5f, -1.0f, -1.5f);

//Left
glNormal3f(-1.0f, 0.0f, 0.0f);
glVertex3f(-1.5f, -1.0f, -1.5f);
glVertex3f(-1.5f, -1.0f, 1.5f);
glVertex3f(-1.5f, 1.0f, 1.5f);
glVertex3f(-1.5f, 1.0f, -1.5f);

glEnd();

We put in special function calls telling OpenGL the "normals" of the different faces in our scene. A face's normal is a vector that is perpendicular to the face. OpenGL needs to know the normals to figure out at what angle a light shines on a face; if a light shines directly on a face, the face is brighter than if the light shines at an angle. One reason OpenGL doesn't figure out the normals itself is that it would be slower than figuring them out in advance; another is that specifying them ourselves lets us do smooth shading, as we will later in this lesson.

As an example, the first face we draw is parallel to the x-y plane and perpendicular to the z-axis, so our normal is (0, 0, 1). We tell OpenGL this by calling glNormal3f(0.0f, 0.0f, 1.0f) right before we specify the coordinates of the face. It is important that the normal points "outward", because if a light is shining in the same direction a shape is facing, then it shouldn't be lit. At any rate, that's how it is with closed surfaces; the light would hit another part of the surface before it reaches the face.

In initRendering, we had called glEnable(GL_NORMALIZE). This makes OpenGL automatically normalize our normals, so that they have a length of 1, which is the form in which OpenGL needs them. We could do this ourselves, but functions such as glScalef affect how we would have to do it. I'll cover this in more detail in a later lesson. Here's what our program looks like:


Our program has a box, with a camera that rotates around the box. Notice that we have one face that is reddish, as it receives most of the red light, one face that is bright yellow, one face that is somewhat dark yellow, and one face that is very dark yellow, which receives no light other than ambient light. The last face would be completely black if there were no ambient light. There's one more important concept. A lot of the time, a set of polygons is meant to approximate a smooth shape, such as a sphere. In this case, we might want the faces to be shaded smoothly. Look at the example below:

Both of the pictures have the same set of polygons, and both are meant to look like a sphere. But the one on the left doesn't use smooth shading, so it looks a lot less, well, smooth. The one on the right looks a lot more like a sphere, but it's still essentially the same shape; notice that it still has a jagged outline.

How does smooth shading work? We specify a different normal for each vertex, one that's equal to the "real" normal that a sphere would have at that point. Then, we tell OpenGL to apply smooth shading. When it draws a triangle, it takes a weighted average of the normals at the vertices to determine the normals at different points on the triangle. In this way, we can draw much better-looking shapes in a given amount of time, since smooth shading is fast on graphics cards, much faster than increasing the number of polygons. As you can see, smooth shading is a very powerful tool.

Now, let's say that our four walls were meant to approximate a circle. Of course, that's a pretty bad approximation (unless you're drunk or something), but we'll do what we can. First, uncomment the line glShadeModel(GL_SMOOTH) in the initRendering function, to enable smooth shading. (If we ever want to disable smooth shading, we can call glShadeModel(GL_FLAT).) Then, in drawScene, uncomment the new calls to glNormal3f and comment out the calls to glNormal3f that used to be there.

//Front
//glNormal3f(0.0f, 0.0f, 1.0f);
glNormal3f(-1.0f, 0.0f, 1.0f);
glVertex3f(-1.5f, -1.0f, 1.5f);
glNormal3f(1.0f, 0.0f, 1.0f);
glVertex3f(1.5f, -1.0f, 1.5f);
glNormal3f(1.0f, 0.0f, 1.0f);
glVertex3f(1.5f, 1.0f, 1.5f);
glNormal3f(-1.0f, 0.0f, 1.0f);
glVertex3f(-1.5f, 1.0f, 1.5f);

//Right
//glNormal3f(1.0f, 0.0f, 0.0f);
glNormal3f(1.0f, 0.0f, -1.0f);
glVertex3f(1.5f, -1.0f, -1.5f);
glNormal3f(1.0f, 0.0f, -1.0f);
glVertex3f(1.5f, 1.0f, -1.5f);
glNormal3f(1.0f, 0.0f, 1.0f);
glVertex3f(1.5f, 1.0f, 1.5f);
glNormal3f(1.0f, 0.0f, 1.0f);
glVertex3f(1.5f, -1.0f, 1.5f);

//Back
//glNormal3f(0.0f, 0.0f, -1.0f);
glNormal3f(-1.0f, 0.0f, -1.0f);
glVertex3f(-1.5f, -1.0f, -1.5f);
glNormal3f(-1.0f, 0.0f, -1.0f);
glVertex3f(-1.5f, 1.0f, -1.5f);
glNormal3f(1.0f, 0.0f, -1.0f);
glVertex3f(1.5f, 1.0f, -1.5f);
glNormal3f(1.0f, 0.0f, -1.0f);
glVertex3f(1.5f, -1.0f, -1.5f);

//Left
//glNormal3f(-1.0f, 0.0f, 0.0f);
glNormal3f(-1.0f, 0.0f, -1.0f);
glVertex3f(-1.5f, -1.0f, -1.5f);
glNormal3f(-1.0f, 0.0f, 1.0f);
glVertex3f(-1.5f, -1.0f, 1.5f);
glNormal3f(-1.0f, 0.0f, 1.0f);
glVertex3f(-1.5f, 1.0f, 1.5f);
glNormal3f(-1.0f, 0.0f, -1.0f);
glVertex3f(-1.5f, 1.0f, -1.5f);

glEnd();

This makes the normals at each vertex equal to the "real" normals of the circle we are trying to approximate. Notice that we always call glNormal3f right before calling glVertex3f for the vertex whose normal we are indicating. Now, our program looks like this:


Download the source code, compile the program, and run it.

Exercises

Make the 'l' key toggle whether lighting is on. (You'll need to check whether key == 'l'.)

Using timers, but without adding another timer, add a third light, green in color, that moves back and forth between (-6, 0, 0) and (6, 0, 0).
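Two ideas from this lesson can be checked with plain vector math: what GL_NORMALIZE does to a normal, and the kind of averaging smooth shading performs between vertex normals. Here's a small sketch with hypothetical helper names, not OpenGL calls:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 {
    float x, y, z;
};

// Scale a vector to unit length; this is the job GL_NORMALIZE asks
// OpenGL to do for every normal we submit.
Vec3 normalize(const Vec3 &v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return Vec3{v.x / len, v.y / len, v.z / len};
}

// Blend two unit vertex normals and renormalize: a rough model of how
// smooth shading produces a normal partway between two vertices.
Vec3 blendNormals(const Vec3 &a, const Vec3 &b, float t) {
    Vec3 mixed{a.x + (b.x - a.x) * t,
               a.y + (b.y - a.y) * t,
               a.z + (b.z - a.z) * t};
    return normalize(mixed);
}
```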

Lesson 5: Textures
A lot of the time, we'll want to put pictures, or "textures", on our 3D polygons, instead of just color. There are two main reasons for this. The first one is more obvious; we might want to give the shape some detailed appearance. For example, it's much easier to make a person's face if we take a picture of a face and apply the texture to a few polygons in the face, than if we have a million tiny colored faces. The second reason is that we might want to approximate the lit appearance of some small feature, without adding tons of polygons. For example, using textures, we can make a golf ball look like it has dimples, as in the below picture:

Okay, so it looks a little cruddy, but it gives you the idea. The above picture has relatively few polygons, as you can observe if you look at its outline. It gives the appearance of having dimples by applying textures to its faces. Rather than using textures, we could just add tons of polygons to the figure, but this would slow down drawing, and it would be a lot of work to figure out the extra points and faces that we want to add. The downside to using textures to simulate small features is that they don't respond correctly to lighting. For example, if we shine light nearly parallel to a given face on the golf ball, one side of each dimple should be light and the other should be dark, but this won't happen if we're using textures. Still, it's better than nothing, and it has the advantages I mentioned over adding extra polygons. Let's get down to some code. First, take a look at what our finished program will look like.

To make a figure like this, the first thing we have to do is load an image with the texture we want. We want to take a picture file and get it into an array of characters (R1, G1, B1, R2, G2, B2, ...) indicating the color of each pixel in the image, where each component ranges from 0 to 255. Our array will start with the lower-left pixel, progress to the right end of that row, then move up to the next row, and so on. This is the format in which OpenGL likes our images.

I've written a loadBMP function to load bitmap images for us. Bitmaps take up a lot of space compared to other image formats, like PNG, but I chose to use bitmaps because they are relatively easy to load into the format that we need. The loadBMP function isn't all that long or complicated (other than the memory management stuff I'm doing to make sure that the program doesn't leak memory). I made it using information about the bitmap file format on Wikipedia. At any rate, you don't have to know how it works; all you have to know is what it does. Take a look at imageloader.h. This gives us the basic idea of what loadBMP does. (The actual code for loadBMP is in imageloader.cpp.) Given a filename, it returns an Image object, which contains the width and height of the image, as well as the array pixels, which stores the pixels' colors in the format we want.

Once we've got the image, we have to send it to OpenGL. We do this in a function we write called loadTexture.
//Makes the image into a texture, and returns the id of the texture
GLuint loadTexture(Image *image) {


Our loadTexture function takes an Image object and returns a GLuint (which is kind of like an unsigned int) giving the id that OpenGL assigned to the texture.
GLuint textureId;
glGenTextures(1, &textureId); //Make room for our texture

First, we tell OpenGL to make room for the texture, by calling glGenTextures. The first argument is the number of textures we need, and the second is an array where OpenGL will store the id's of the textures. In this case, the second argument is an "array" of size 1. By C++ magic, using &textureId as the second argument will result in having textureId store the id of our one texture.
glBindTexture(GL_TEXTURE_2D, textureId); //Tell OpenGL which texture to edit

//Map the image to the texture
glTexImage2D(GL_TEXTURE_2D,    //Always GL_TEXTURE_2D
             0,                //0 for now
             GL_RGB,           //Format OpenGL uses for image
             image->width, image->height, //Width and height
             0,                //The border of the image
             GL_RGB,           //GL_RGB, because pixels are stored in RGB format
             GL_UNSIGNED_BYTE, //GL_UNSIGNED_BYTE, because pixels are stored
                               //as unsigned numbers
             image->pixels);   //The actual pixel data

Now, we have to assign the texture id to our image data. We call glBindTexture(GL_TEXTURE_2D, textureId) to let OpenGL know that we want to work with the texture we just created. Then, we call glTexImage2D to load the image into OpenGL. The comments explicate what each of the arguments is, although you don't really need to understand all of them. OpenGL will copy our pixel data, so after this call, we can free the memory used by the image using delete. (We don't do this here; we do it elsewhere in main.cpp.) Note that we should only use images whose widths and heights are 64, 128, or 256, since computers like powers of 2. Other sizes of images might not work properly.
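Two details here are easy to get wrong, so here's a sketch of both: where a pixel's bytes sit in the bottom-up RGB array that loadBMP produces, and a quick power-of-two check for texture dimensions. Both helpers are hypothetical, not part of imageloader.h:

```cpp
#include <cassert>

// Byte offset of the red component of pixel (x, y) in a bottom-up RGB
// array: rows start at the lower-left pixel and run left to right, and
// each pixel occupies 3 bytes (R, G, B).
int pixelOffset(int x, int y, int width) {
    return 3 * (y * width + x);
}

// True when a texture dimension is a power of two (64, 128, 256, ...),
// the safe choice for the OpenGL versions this tutorial targets.
bool isPowerOfTwo(int n) {
    return n > 0 && (n & (n - 1)) == 0;
}
```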
    return textureId; //Returns the id of the texture
}

Last, but not least, we return the id of the texture. We want to load the file "vtr.bmp" as an image and make it into an OpenGL texture, so we add the following to initRendering:
Image* image = loadBMP("vtr.bmp");
_textureId = loadTexture(image);

It's pretty straightforward. We load the image, then load the texture into OpenGL. On to drawScene. We start by calling glEnable(GL_TEXTURE_2D) to enable applying textures and glBindTexture(GL_TEXTURE_2D, _textureId) to tell OpenGL that we want to use the texture with id _textureId. Now, we have to set up how we want OpenGL to map our texture. To understand what this means, we have to know a little more about texture mapping.


Each pixel that we draw for a textured polygon corresponds to a point on our image. For example, it might correspond to the green point in the above picture. OpenGL has to figure out what color to make the pixel. The most straightforward approach is to take the color of the nearest texel (texture pixel), light blue in this case. But this makes our texture look blocky, like in the bottom face of the screenshot of our program. You may have seen blocky textures in games when you got really close to a wall or other object; they use this type of mapping. A better idea is to average the colors of the texels surrounding the point. In the example, we would take a weighted average of the light blue texel on which the point lies along with the one above it, the one to its left, and the one above it and to its left. Using this method makes the image look blurry instead of blocky, which is usually better. In general, there's little reason to use the blocky mapping, in my opinion.

Just for kicks, we use the blocky mapping style on the bottom face. To do this, we call glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST) and glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST). The first call tells OpenGL to use the blocky mapping (GL_NEAREST) when the texture is far away from us, and the second call tells it to use blocky mapping when the texture is close. If we wanted to use blurry mapping, we'd pass GL_LINEAR as the third parameter of these two functions. If you want to see what the program looks like when it uses the blurry mapping, comment these two lines out and uncomment the two lines beneath them.

Now, we call glColor3f(1.0f, 0.2f, 0.2f) to apply a color to our texture. The call makes the image look reddish. Why are we doing this? Beats me. I just wanted to show you that you can do it. The call tells OpenGL to multiply the green and blue components of the image by 0.2.
If we didn't want to color the image, we'd call glColor3f(1.0f, 1.0f, 1.0f) instead. By the way, you can even apply color blending to a texture.
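That "multiply the components" behavior happens per channel and per texel. Here's a sketch of the arithmetic, assuming OpenGL's default GL_MODULATE texture environment, with a hypothetical modulate helper (texel bytes are scaled from 0-255 down to 0-1 first):

```cpp
#include <cassert>

// One channel of GL_MODULATE-style texturing: the texel byte, scaled to
// [0, 1], multiplied by the matching component of the current glColor.
// With glColor3f(1, 1, 1) the texel passes through unchanged; with
// glColor3f(1, 0.2, 0.2) the green and blue channels are dimmed to 20%.
float modulate(unsigned char texel, float color) {
    return (texel / 255.0f) * color;
}
```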
glBegin(GL_QUADS);

glNormal3f(0.0f, 1.0f, 0.0f);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(-2.5f, -2.5f, 2.5f);
glTexCoord2f(1.0f, 0.0f);
glVertex3f(2.5f, -2.5f, 2.5f);
glTexCoord2f(1.0f, 1.0f);
glVertex3f(2.5f, -2.5f, -2.5f);
glTexCoord2f(0.0f, 1.0f);
glVertex3f(-2.5f, -2.5f, -2.5f);

glEnd();

Now, in addition to a normal vector, each vertex has a texture coordinate, which indicates the point on the image to which that vertex maps. The texture coordinate (a + b, c + d), where a and c are integers and b and d are fractions, indicates the spot that is the fractional amount b to the right of the left edge of the texture and the fractional amount d above its bottom edge. To specify the texture coordinates of a vertex, we simply call glTexCoord2f with the texture coordinates we want before calling glVertex3f for the vertex.
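Since only the fractional part of a repeating texture coordinate selects the spot on the image, coordinates like 2.25 and 0.25 sample the same column. Here's a sketch of that reduction, assuming the default GL_REPEAT wrap mode (wrapCoord is a hypothetical name):

```cpp
#include <cassert>
#include <cmath>

// Reduce a repeating texture coordinate to its fractional part in
// [0, 1), which is all that picks a point on the image when the wrap
// mode is GL_REPEAT.
float wrapCoord(float t) {
    return t - std::floor(t);
}
```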
//Back
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glColor3f(1.0f, 1.0f, 1.0f);

For the face in the back, we want to use blurry texture mapping, so we call glTexParameteri with GL_LINEAR as the last argument. Then, we call glColor3f(1.0f, 1.0f, 1.0f) so that the image doesn't have any kind of coloring. Note that we don't have to call glBindTexture again to set the texture; OpenGL just stays with the same texture.
glBegin(GL_TRIANGLES);

glNormal3f(0.0f, 0.0f, 1.0f);
glTexCoord2f(0.0f, 0.0f);
glVertex3f(-2.5f, -2.5f, -2.5f);
glTexCoord2f(5.0f, 5.0f);
glVertex3f(0.0f, 2.5f, -2.5f);
glTexCoord2f(10.0f, 0.0f);
glVertex3f(2.5f, -2.5f, -2.5f);

glEnd();

We specify the normal, texture coordinates, and vertices of the triangle in the back. Notice that the way we have the texture coordinates set up, our image will be repeated and squished over the face of the triangle, as in the screenshot of our program.
//Left
glDisable(GL_TEXTURE_2D);
glColor3f(1.0f, 0.7f, 0.3f);
glBegin(GL_QUADS);

glNormal3f(1.0f, 0.0f, 0.0f);
glVertex3f(-2.5f, -2.5f, 2.5f);
glVertex3f(-2.5f, -2.5f, -2.5f);
glVertex3f(-2.5f, 2.5f, -2.5f);
glVertex3f(-2.5f, 2.5f, 2.5f);

glEnd();

Now, we want to switch back to using colors instead of textures, so we call glDisable(GL_TEXTURE_2D) to disable textures and then make a colored face like in previous lessons. And that's the way textures work in OpenGL. Download the source code, compile the program, and run it.

Exercises

Make the 't' key toggle whether textures are applied to the shapes.

Using timers, make the texture continuously move along the surface of the triangle.

Make or find your own bitmap file to use as a texture (the width and height should both be 64, 128, or 256). Apply it to the bottom face as a second texture, without changing the texture of the triangle in the back.


Lesson 6: Putting It All Together


We've learned a lot so far. Let's briefly go over the OpenGL we learned in the previous lessons, to make sure we understand everything. If you want, you can skip this lesson, but you might want to solidify everything you've learned thus far. Since we love spinning objects so much, we want to make a spinning cube with two sides textured, two sides solid-colored, and two sides with a color gradient.

Let's take a look at the source code. We'll briefly go through all of the code (except for the comments at the top).
#include <stdlib.h>

#ifdef __APPLE__
#include <OpenGL/OpenGL.h>
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif

#include "imageloader.h"

Our include files.


const float BOX_SIZE = 7.0f; //The length of each side of the cube

float _angle; //The rotation of the box
GLuint _textureId; //The OpenGL id of the texture

BOX_SIZE is a constant storing the length of each side of the box. _angle stores the angle by which the box is currently rotated. _textureId has the id of the texture we're applying to two of the faces.
void handleKeypress(unsigned char key, int x, int y) {
    switch (key) {
        case 27: //Escape key
            exit(0);
    }
}

Handles keypresses. It exits the program when the user presses ESC.
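The 27 in the case label is simply the ASCII code of the escape character. A trivial sketch of the same test with a hypothetical helper:

```cpp
#include <cassert>

// True when a GLUT keyboard callback received the escape key, whose
// ASCII code is 27 ('\033').
bool isEscape(unsigned char key) {
    return key == 27;
}
```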
//Makes the image into a texture, and returns the id of the texture
GLuint loadTexture(Image *image) {
    GLuint textureId;
    glGenTextures(1, &textureId);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexImage2D(GL_TEXTURE_2D,
                 0,
                 GL_RGB,
                 image->width, image->height,
                 0,
                 GL_RGB,
                 GL_UNSIGNED_BYTE,
                 image->pixels);
    return textureId;
}

Our function for loading a texture from an Image object.


void initRendering() {
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_COLOR_MATERIAL);
    
    Image *image = loadBMP("vtr.bmp");
    _textureId = loadTexture(image);
    delete image;
}

Our function for initializing rendering. We enable depth testing, like always, as well as color, lighting, and light source number 0. Then, we load vtr.bmp into an Image object, load it into OpenGL as a texture, and delete the Image object, since we don't need it any more.
void handleResize(int w, int h) {
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (float)w / (float)h, 1.0, 200.0);
}

Our function for handling window resizes. It doesn't change very much in our different programs.
void drawScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

drawScene is our function for drawing the 3D scene. First, we call glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) to clear information from the last draw, like we always do.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

We switch to the "normal" transformation mode, and reset transformations so that we are at the origin and are facing in the negative z direction.

glTranslatef(0.0f, 0.0f, -20.0f);

We move forward 20 units, so that our cube will be 20 units in front of the camera.
GLfloat ambientLight[] = {0.3f, 0.3f, 0.3f, 1.0f};
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientLight);

We set the scene's ambient light, which shines everywhere, to have an intensity of 0.3.
GLfloat lightColor[] = {0.7f, 0.7f, 0.7f, 1.0f};
GLfloat lightPos[] = {-2 * BOX_SIZE, BOX_SIZE, 4 * BOX_SIZE, 1.0f};
glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor);
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);

We set up a light source with an intensity of 0.7 at (-2 * BOX_SIZE, BOX_SIZE, 4 * BOX_SIZE), relative to the center of the cube.
glRotatef(-_angle, 1.0f, 1.0f, 0.0f);

We rotate by the current angle about the vector (1, 1, 0), in order to produce the cube's spinning motion.
glBegin(GL_QUADS);

//Top face
glColor3f(1.0f, 1.0f, 0.0f);
glNormal3f(0.0f, 1.0f, 0.0f);
glVertex3f(-BOX_SIZE / 2, BOX_SIZE / 2, -BOX_SIZE / 2);
glVertex3f(-BOX_SIZE / 2, BOX_SIZE / 2, BOX_SIZE / 2);
glVertex3f(BOX_SIZE / 2, BOX_SIZE / 2, BOX_SIZE / 2);
glVertex3f(BOX_SIZE / 2, BOX_SIZE / 2, -BOX_SIZE / 2);

//Bottom face
glColor3f(1.0f, 0.0f, 1.0f);
glNormal3f(0.0f, -1.0f, 0.0f);
glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, -BOX_SIZE / 2);
glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, -BOX_SIZE / 2);
glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, BOX_SIZE / 2);
glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, BOX_SIZE / 2);

We draw the top and bottom faces, which are solid-colored. Before giving the coordinates of each face using glVertex3f, we specify their colors and their normals, which have magnitude 1.
    //Left face
    glNormal3f(-1.0, 0.0f, 0.0f);
    glColor3f(0.0f, 1.0f, 1.0f);
    glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, -BOX_SIZE / 2);
    glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, BOX_SIZE / 2);
    glColor3f(0.0f, 0.0f, 1.0f);
    glVertex3f(-BOX_SIZE / 2, BOX_SIZE / 2, BOX_SIZE / 2);
    glVertex3f(-BOX_SIZE / 2, BOX_SIZE / 2, -BOX_SIZE / 2);

    //Right face
    glNormal3f(1.0, 0.0f, 0.0f);
    glColor3f(1.0f, 0.0f, 0.0f);
    glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, -BOX_SIZE / 2);
    glVertex3f(BOX_SIZE / 2, BOX_SIZE / 2, -BOX_SIZE / 2);
    glColor3f(0.0f, 1.0f, 0.0f);
    glVertex3f(BOX_SIZE / 2, BOX_SIZE / 2, BOX_SIZE / 2);
    glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, BOX_SIZE / 2);


We draw the left and right faces. For each face, we specify the normals first. Then, we set the current color to the first color of the gradient, and assign this color to the first two vertices by immediately calling glVertex3f twice. Then, we change the current color to the second color of the gradient, and assign this color to the other two vertices by subsequently calling glVertex3f two more times.
    glEnd();

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, _textureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glColor3f(1.0f, 1.0f, 1.0f);

Now, we want to apply our texture. We call glEnd to stop drawing quadrilaterals, because some texture functions can't be called in a glBegin-glEnd block. We call glEnable(GL_TEXTURE_2D) to enable OpenGL to apply textures to subsequent polygons. We call glBindTexture(GL_TEXTURE_2D, _textureId) to tell OpenGL that we want to apply the texture with id _textureId. We call glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR) and glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR) to have OpenGL use blurry, rather than blocky, texture mapping. Then, we call glColor3f(1.0f, 1.0f, 1.0f) so that OpenGL won't try to change the color of our texture.
    glBegin(GL_QUADS);

    //Front face
    glNormal3f(0.0, 0.0f, 1.0f);
    glTexCoord2f(0.0f, 0.0f);
    glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, BOX_SIZE / 2);
    glTexCoord2f(1.0f, 0.0f);
    glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, BOX_SIZE / 2);
    glTexCoord2f(1.0f, 1.0f);
    glVertex3f(BOX_SIZE / 2, BOX_SIZE / 2, BOX_SIZE / 2);
    glTexCoord2f(0.0f, 1.0f);
    glVertex3f(-BOX_SIZE / 2, BOX_SIZE / 2, BOX_SIZE / 2);

    //Back face
    glNormal3f(0.0, 0.0f, -1.0f);
    glTexCoord2f(0.0f, 0.0f);
    glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, -BOX_SIZE / 2);
    glTexCoord2f(1.0f, 0.0f);
    glVertex3f(-BOX_SIZE / 2, BOX_SIZE / 2, -BOX_SIZE / 2);
    glTexCoord2f(1.0f, 1.0f);
    glVertex3f(BOX_SIZE / 2, BOX_SIZE / 2, -BOX_SIZE / 2);
    glTexCoord2f(0.0f, 1.0f);
    glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, -BOX_SIZE / 2);

    glEnd();

We draw the last two faces. For each, we specify the normal and then alternate between specifying the texture coordinates of a vertex and the actual coordinates of a vertex.
glDisable(GL_TEXTURE_2D);

Now, we're done drawing textures, so we disable them. That way, the next time we draw something, OpenGL won't automatically apply our texture.

glutSwapBuffers();

We send the scene to the window.


//Called every 25 milliseconds
void update(int value) {
    _angle += 1.0f;
    if (_angle > 360) {
        _angle -= 360;
    }

Here's the update function, which we're going to have GLUT call every 25 milliseconds. First, we increase the angle by 1 and, to try to keep the _angle variable low, we decrease it by 360 if it's greater than 360.
glutPostRedisplay();

We tell GLUT that our scene has changed, and it should be redrawn.
    glutTimerFunc(25, update, 0);
}

We tell GLUT to call update again in 25 milliseconds.


int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(400, 400);

We initialize GLUT.
glutCreateWindow("Putting it All Together - videotutorialsrock.com");

We tell GLUT to create our window.


initRendering();

We call our initRendering function to initialize some OpenGL rendering stuff.


glutDisplayFunc(drawScene); glutKeyboardFunc(handleKeypress); glutReshapeFunc(handleResize);

We tell GLUT what functions to use to draw our scene, handle key presses, and resize the window.
glutTimerFunc(25, update, 0);

We tell GLUT to call our timer function in 25 milliseconds.


    glutMainLoop();
    return 0;
}

We tell GLUT to start doing everything. Download the source code, compile the program, and run it.

Exercises

Make the light move in a circle around the cube, in addition to having the cube rotate. Make the rotation rate and angle different than that of the cube.

Instead of one large cube, display two smaller cubes side by side, identical in appearance to the original. Make them rotate continuously about their centers, but at different angles and at different speeds.

Change the cube into your favorite 3D solid (other than a cube, you cheater). Make sure at least one face is textured and at least one is colored. You can turn off the lighting if you want, so you don't have to worry about getting the normals right.

Part 2: Topics in 3D Programming


Lesson 7: Terrain
One common feature in games and other 3D programs is the presence of a 3D terrain. In this lesson, we will be making the following terrain:

The question is, how do we represent a terrain? The most straightforward approach, and, as it turns out, one of the best approaches, is to make a 2D grid in the x-z plane and store the height of the terrain at each grid point. This doesn't let us make every terrain; for example, we can't have a purely vertical wall or a wall that is slanted "backwards". But still, we can do a lot.

We could hard code every height into the program itself. But it's better to store the heights in a separate file. The most straightforward type of file we can use is a grayscale image, where white represents the maximum allowable height and black represents the minimum allowable height. Such an image file is called a "heightmap". This also turns out to be a good idea. For one, it allows us to see what our terrain looks like, even without rendering it in 3D. Below is a zoomed-in version of the heightmap for our program.


Let's see how our program loads and displays the terrain. You'll notice that at the top, we have #include "vec3f.h". This includes the "Vec3f" library, which represents a vector of three floats. You can see everything that you can do with a Vec3f by looking at vec3f.h. We'll use the Vec3f library to store normal vectors.
/* Represents a terrain, by storing a set of heights and normals at 2D
   locations */
typedef struct Terrain_t {
    int w;               //Width
    int l;               //Length
    float** hs;          //Heights
    Vec3f** normals;
    int computedNormals; //Whether normals is up-to-date
} Terrain;

Here's our terrain structure. It stores a width and length, indicating the number of grid points in the x and z directions respectively. It stores all of the heights and the normals at each point using two-dimensional arrays. Finally, it has a flag computedNormals that tells us whether the normals array actually has the correct normals. We'll want to first set all of the heights and then compute all of the normals at once, so the normals may not yet have been computed.
void Terrain_init (Terrain *t, int w2, int l2) {
    int i;
    t->w = w2;
    t->l = l2;
    t->hs = (float**) malloc (sizeof (float*) * t->l);
    for(i = 0; i < t->l; i++) {
        t->hs[i] = (float*) malloc (sizeof (float) * t->w);
    }
    t->normals = (Vec3f**) malloc (sizeof (Vec3f*) * t->l);
    for(i = 0; i < t->l; i++) {
        t->normals[i] = (Vec3f*) malloc (sizeof (Vec3f) * t->w);
    }


    t->computedNormals = 0; //false
}

Here's the function that initializes the Terrain structure. It initializes all of our variables. We also have functions that allow us to set and get the height of the terrain at a particular grid point.
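The height setter and getter themselves aren't shown in this excerpt. A minimal sketch of what they might look like, consistent with how loadTerrain calls Terrain_setHeight below (the Vec3f stub here is mine, standing in for vec3f.h, and the getter's exact name and signature are a guess):

```c
#include <assert.h>

typedef struct { float v[3]; } Vec3f; /* stub; the real type comes from vec3f.h */

typedef struct Terrain_t {
    int w;               //Width
    int l;               //Length
    float** hs;          //Heights
    Vec3f** normals;
    int computedNormals; //Whether normals is up-to-date
} Terrain;

/* Sets the height at grid point (x, z) and invalidates the cached normals. */
void Terrain_setHeight(Terrain *t, int x, int z, float y) {
    t->hs[z][x] = y;
    t->computedNormals = 0;
}

/* Returns the height at grid point (x, z). */
float Terrain_getHeight(Terrain *t, int x, int z) {
    return t->hs[z][x];
}
```

Note that the setter clears computedNormals, so the normals get recomputed lazily the next time someone asks for one.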
/* Computes the normals, if they haven't been computed yet */
void Terrain_computeNormals(Terrain *t) {
    /* ... */
}

This function computes the normal at each point. We'll come back to it.
/* Returns the normal at (x, z) */
Vec3f* Terrain_getNormal(Terrain *t, int x, int z) {
    if (!t->computedNormals) {
        Terrain_computeNormals (t);
    }
    return &(t->normals[z][x]);
}

Here, we have a function that returns the normal at some point.


//Loads a terrain from a heightmap. The heights of the terrain range from
//-height / 2 to height / 2.
Terrain* loadTerrain(char* filename, float height) {
    int x, y;
    Image* image = loadBMP(filename);
    Terrain* t = (Terrain*) malloc (sizeof (Terrain));
    Terrain_init (t, image->width, image->height);
    for(y = 0; y < image->height; y++) {
        for(x = 0; x < image->width; x++) {
            unsigned char color =
                (unsigned char)image->pixels[3 * (y * image->width + x)];
            float h = height * ((color / 255.0f) - 0.5f);
            Terrain_setHeight(t, x, y, h);
        }
    }
    Terrain_computeNormals (t);
    return t;
}

Here's our function for loading a terrain from an image file. First, we call our trusty ol' loadBMP function to load the bitmap from the file. Then, we go through the pixels of the image and use them to set the heights of the terrain. A color of 0 corresponds to a height of -height / 2, and a color of 255 corresponds to a height of height / 2. It doesn't matter which color component we use; I used the red component for no particular reason. Finally, we force the terrain to compute all of the normals. (Once the heights have been copied, the image is no longer needed, so this would also be a good place to free it.)
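The byte-to-height mapping is easy to sanity-check in isolation. This sketch (the heightFromColor name is mine) mirrors the expression used in loadTerrain:

```c
#include <assert.h>
#include <math.h>

/* Maps a grayscale byte (0..255) to a height in [-height / 2, height / 2]. */
float heightFromColor(unsigned char color, float height) {
    return height * ((color / 255.0f) - 0.5f);
}
```

With height = 20, as in the main function below, black (0) maps to -10 and white (255) maps to 10.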

Now let's skip down to partway into drawScene.


float scale = 5.0f / max(_terrain->w - 1, _terrain->l - 1);
glScalef(scale, scale, scale);
glTranslatef(-(float)_terrain->w / 2, 0.0f, -(float)_terrain->l / 2);

We scale our terrain, so that it is at most 5 units wide and 5 units long. Then, we translate it so it's centered.
glColor3f(0.3f, 0.9f, 0.0f);
for(int z = 0; z < _terrain->l - 1; z++) {
    //Makes OpenGL draw a triangle at every three consecutive vertices
    glBegin(GL_TRIANGLE_STRIP);
    for(int x = 0; x < _terrain->w; x++) {
        Vec3f* normal = Terrain_getNormal(_terrain, x, z);
        glNormal3f(normal->v[0], normal->v[1], normal->v[2]);
        glVertex3f(x, _terrain->hs[z][x], z);
        normal = Terrain_getNormal(_terrain, x, z + 1);
        glNormal3f(normal->v[0], normal->v[1], normal->v[2]);
        glVertex3f(x, _terrain->hs[z + 1][x], z + 1);
    }
    glEnd();
}

Here, we draw the terrain. GL_TRIANGLE_STRIP is new. It makes OpenGL draw a triangle at every three consecutive vertices that you indicate. If your vertices are v1, v2, v3, ..., then OpenGL will draw the triangles (v1, v2, v3), (v2, v3, v4), (v3, v4, v5), .... To draw the terrain, for each z, we do a triangle strip with vertices (0, h1, z), (0, h2, z + 1), (1, h3, z), (1, h4, z + 1), (2, h5, z), (2, h6, z + 1), .... Using triangle strips is not only more convenient than using triangles; it's faster, as there are fewer 3D vertices to send to the graphics card. So, our terrain is drawn as shown below:

The way we draw the terrain, each cell in the x-z grid is carved up into two triangles, using the diagonal going out and to the right. We could have used the other diagonal to carve each cell, but it doesn't matter too much if our terrain is "smooth enough". We also could have used GL_QUADS instead, but that's not such a good idea when the four vertices aren't in the same plane.
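To make the GL_TRIANGLE_STRIP rule concrete, here's a small sketch that expands a strip of n vertices into the triangles OpenGL would draw. It ignores the alternating winding order that OpenGL maintains internally, and the function name is mine, not part of the tutorial:

```c
#include <assert.h>

/* Expands a strip of n vertex indices into (n - 2) triangles.
   tris must have room for 3 * (n - 2) ints. Returns the triangle count. */
int stripToTriangles(int n, int* tris) {
    int i, count = 0;
    for (i = 0; i + 2 < n; i++) {
        tris[3 * count]     = i;     /* triangle (i, i + 1, i + 2) */
        tris[3 * count + 1] = i + 1;
        tris[3 * count + 2] = i + 2;
        count++;
    }
    return count;
}
```

A strip of 5 vertices yields 3 triangles, compared with the 9 vertices we would have to send for 3 separate GL_TRIANGLES, which is why strips are faster for terrain.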
int main(int argc, char** argv) {
    //...
    _terrain = loadTerrain("heightmap.bmp", 20);
    //...
}


In our main function, we call loadTerrain to load the 3D terrain. Now let's go back and see how we computed our normals.
//Computes the normals, if they haven't been computed yet
void Terrain_computeNormals(Terrain *t) {
    if (t->computedNormals) {
        return;
    }

    int i, z, x;

    /* Compute the rough version of the normals */
    Vec3f** normals2;
    normals2 = (Vec3f**) malloc (sizeof (Vec3f*) * t->l);
    for(i = 0; i < t->l; i++) {
        normals2[i] = (Vec3f*) malloc (sizeof (Vec3f) * t->w);
    }

    Vec3f sum, out, in, left, right;
    Vec3f *tmp;
    for(z = 0; z < t->l; z++) {
        for(x = 0; x < t->w; x++) {
            Vec3f_Init (&sum, 0.0f, 0.0f, 0.0f);

            if (z > 0) {
                Vec3f_Init (&out, 0.0f, t->hs[z - 1][x] - t->hs[z][x], -1.0f);
            }
            if (z < t->l - 1) {
                Vec3f_Init (&in, 0.0f, t->hs[z + 1][x] - t->hs[z][x], 1.0f);
            }
            if (x > 0) {
                Vec3f_Init (&left, -1.0f, t->hs[z][x - 1] - t->hs[z][x], 0.0f);
            }
            if (x < t->w - 1) {
                Vec3f_Init (&right, 1.0f, t->hs[z][x + 1] - t->hs[z][x], 0.0f);
            }

            if (x > 0 && z > 0) {
                //sum += out.cross(left).normalize();
                tmp = Vec3f_cross(&out, &left);
                tmp = Vec3f_normalize (tmp);
                sum.v[0] += tmp->v[0];
                sum.v[1] += tmp->v[1];
                sum.v[2] += tmp->v[2];
            }
            if (x > 0 && z < t->l - 1) {
                //sum += left.cross(in).normalize();
                tmp = Vec3f_cross(&left, &in);
                tmp = Vec3f_normalize (tmp);
                sum.v[0] += tmp->v[0];
                sum.v[1] += tmp->v[1];
                sum.v[2] += tmp->v[2];
            }
            if (x < t->w - 1 && z < t->l - 1) {
                //sum += in.cross(right).normalize();
                tmp = Vec3f_cross(&in, &right);
                tmp = Vec3f_normalize (tmp);
                sum.v[0] += tmp->v[0];
                sum.v[1] += tmp->v[1];
                sum.v[2] += tmp->v[2];
            }
            if (x < t->w - 1 && z > 0) {
                //sum += right.cross(out).normalize();
                tmp = Vec3f_cross(&right, &out);
                tmp = Vec3f_normalize (tmp);
                sum.v[0] += tmp->v[0];
                sum.v[1] += tmp->v[1];
                sum.v[2] += tmp->v[2];
            }

            normals2[z][x].v[0] = sum.v[0];
            normals2[z][x].v[1] = sum.v[1];
            normals2[z][x].v[2] = sum.v[2];
        }
    }

First we'll compute approximate normals, and store them in the variable normals2. One way to estimate a normal at a given point is to take the vector that is perpendicular to a triangle with vertices at the point and at two points adjacent to it. For example, we could take the triangle with vertices at the point, the point immediately right of it, and the point immediately outward with respect to it, and take a vector perpendicular to that. To find the vector perpendicular to a triangle, we take the cross product of two of its edges. We compute the four edges in, out, left, and right for each point. Then, we take the cross product of a pair of edges to determine the vector perpendicular to a triangle. We do this for each of the four triangles "adjacent" to the point and take the average of the four vectors (which is just proportional to the sum). What exactly does an average of four normal vectors mean, geometrically? Absolutely nothing. It's just a way I came up with to approximate the normals. The cardinal rule of computer graphics is to do what looks right. So let's see if this weird averaging will work out in the end. Note that we have to use a bunch of if statements for points that are at the edges, since they may have fewer than four "adjacent" triangles.

Okay, so we computed a bunch of normals. But it would be nice to "smooth" them out, so that each normal is more similar to adjacent normals. This way, the lighting in our 3D scene will look more smooth. This is particularly important because the heightmap only uses 64 different heights, so each height has a limited amount of precision, making the lighting look rough. To motivate us, here's a side-by-side comparison of our scene with unsmoothed and smoothed normals:


How exactly are we going to smooth the normals? For each normal, let's average in a little bit of the surrounding normals.
    float FALLOUT_RATIO = 0.5f;
    for(z = 0; z < t->l; z++) {
        for(x = 0; x < t->w; x++) {
            sum.v[0] = normals2[z][x].v[0];
            sum.v[1] = normals2[z][x].v[1];
            sum.v[2] = normals2[z][x].v[2];

            if (x > 0) {
                //sum += normals2[z][x - 1] * FALLOUT_RATIO;
                sum.v[0] += normals2[z][x - 1].v[0] * FALLOUT_RATIO;
                sum.v[1] += normals2[z][x - 1].v[1] * FALLOUT_RATIO;
                sum.v[2] += normals2[z][x - 1].v[2] * FALLOUT_RATIO;
            }
            if (x < t->w - 1) {
                //sum += normals2[z][x + 1] * FALLOUT_RATIO;
                sum.v[0] += normals2[z][x + 1].v[0] * FALLOUT_RATIO;
                sum.v[1] += normals2[z][x + 1].v[1] * FALLOUT_RATIO;
                sum.v[2] += normals2[z][x + 1].v[2] * FALLOUT_RATIO;
            }
            if (z > 0) {
                //sum += normals2[z - 1][x] * FALLOUT_RATIO;
                sum.v[0] += normals2[z - 1][x].v[0] * FALLOUT_RATIO;
                sum.v[1] += normals2[z - 1][x].v[1] * FALLOUT_RATIO;
                sum.v[2] += normals2[z - 1][x].v[2] * FALLOUT_RATIO;
            }
            if (z < t->l - 1) {
                //sum += normals2[z + 1][x] * FALLOUT_RATIO;
                sum.v[0] += normals2[z + 1][x].v[0] * FALLOUT_RATIO;
                sum.v[1] += normals2[z + 1][x].v[1] * FALLOUT_RATIO;
                sum.v[2] += normals2[z + 1][x].v[2] * FALLOUT_RATIO;
            }

            if (Vec3f_magnitude(&sum) == 0) {
                Vec3f_Init (&sum, 0.0f, 1.0f, 0.0f);
            }
            t->normals[z][x].v[0] = sum.v[0];
            t->normals[z][x].v[1] = sum.v[1];
            t->normals[z][x].v[2] = sum.v[2];
        }
    }

So, at each point, we take a weighted average of the "rough" normal at the point and the "rough" normals at the adjacent points. Each adjacent normal gets a weight of 0.5, and the normal at the point gets a weight of 1. Again, this average has no real meaning, but it still makes the scene look good. Note that we set the normal to some arbitrary vector if the average turns out to be the zero vector. This is because we can't use the zero vector, as it's impossible to normalize, but we have to use something.

The lighting in our scene looks pretty good. Mission accomplished. Now you know how to make a nice-looking 3D terrain. Download the source code, compile the program, and run it.

Exercises

Smooth out the normals even more, by averaging in the diagonally adjacent normals in addition to the horizontally adjacent normals (for example, average in the normal that is one unit to the left and one unit out). Use a different weighting for these normals. Note that if we use too much smoothing, the lighting will look very smooth, but unrealistic. In the extreme case, all of our normals would be the same.

Make your own 60x60 heightmap, and view it in the program. Make it a little interesting (e.g. don't use a solid color). You might make it using an image editing program like The GIMP, Inkscape, or Paint; download a heightmap online and scale it to be 60x60; or find a program designed for making heightmaps.

Change the program to add together the heights of your heightmap and the heights of my heightmap and display the resulting terrain.

Lesson 8: Drawing Text


Drawing text is fairly important on occasion. There are a few possible approaches to drawing text in OpenGL. I'll outline four approaches and their pros and cons.

1. You can use bitmaps, not the kind that uses an image file, but a certain OpenGL construct that I haven't shown yet. Each character is represented as a bitmap. Each pixel in the bitmap has a bit, which is 1 if the pixel is colored and 0 if it is transparent. Each frame, you'd send the bitmaps for the characters to the graphics card. The graphics card would then bypass the usual 3D transformations and just draw the pixels right on top of the window. I'm not a fan of this approach. It's slow, as you have to send each bitmap to the graphics card each frame, which is a lot of data. The method is also inflexible; you can't scale or transform the characters very well. You can do it in GLUT using glutBitmapCharacter, whose documentation is at this site. But again, there are a lot of disadvantages to the technique.

2. You can represent characters using textures. Each character would correspond to a certain part of some texture, with some of the pixels in the texture white and the rest transparent (which I haven't shown how to do yet). You would draw a quadrilateral for each character and map the appropriate part of the appropriate texture to it. This approach is alright; it gives you some flexibility as to how and where you draw characters in 3D. It's also pretty fast. But the characters wouldn't scale too well; they'll look pixelated if you zoom in too far.

3. You can draw a bunch of lines in 3D, using GL_LINES (which I also haven't shown yet, although you can probably guess how GL_LINES works). This technique is fast and does allow scaling and otherwise transforming characters. However, the characters would look better if they covered an area rather than a perimeter. Also, it's fairly tedious to figure out a set of lines to represent each character. You can draw outlined text in GLUT using glutStrokeCharacter, whose documentation is at this site.

4. You can draw a bunch of polygons in 3D. This technique also allows us to transform characters well. It even lets us give the characters 3D depth, so that they look 3D rather than flat. However, it's slower than drawing lines and using textures. Also, it's even more annoying to figure out how to describe each character as a set of polygons than it is to figure out how to describe one as a set of lines.

Of the four techniques presented for drawing text in OpenGL, we'll be using the last one. I've already done most of the work for it. Using the open-source 3D modeling program Blender, I used the "add text" feature to come up with a 3D model for each of the 95 printable ASCII characters. I used the decimate tool to reduce their polygon count, so that they average around 40 polygons each. I gave each character some 3D depth, saved each character to a separate file, and used a program to load the files and output them into a special file format I designed. Then, I wrote code to load the models from the file and display them using handy t3dDraw2D and t3dDraw3D functions, which I will describe later. Details aside, the basic idea is that there is a file with all of the positions of the 3D polygons for the different characters. The t3dDraw2D and t3dDraw3D functions take care of drawing the appropriate triangles. The functions themselves use some OpenGL techniques I haven't shown yet, in order to make them draw as quickly as possible.

How fast are these functions? The 2D drawing function draws about 40 triangles per character. Graphics cards can draw millions of triangles per second. So, we would expect the function to be able to draw about 1,000,000 / 40 = 25,000 characters per second, which is about what I observed in a little test I rigged up. So, if you're not drawing tons of characters, it should be sufficient, but if you are, you might want to switch to using glutStrokeCharacter to draw lines rather than polygons.
We're going to put 3D text on each of the four sides of a square, so that our program will look like this:

Let's look at the source.


//Computes a scaling value so that the strings fit on the square
float computeScale(char* strs[4]) {
    float maxWidth = 0;
    int i;
    printf (" begin compute scale \n");
    for(i = 0; i < 4; i++) {
        float width = t3dDrawWidth(strs[i]);
        if (width > maxWidth) {
            maxWidth = width;
        }
    }
    printf (" end compute scale %f \n", maxWidth);
    return 2.6f / maxWidth;
}

Each side of the square will have a length of 3. We want the longest string to take up 2.6 units on the square, so we use a computeScale function to determine the factor by which we should scale the text. We go through each of the four strings and use t3dDrawWidth to determine each string's draw width, as a multiple of the height of the font. We take 2.6 divided by the maximum width to be our scaling factor.
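The scaling rule is simple enough to check on its own. This sketch mirrors computeScale's logic, except that scaleForWidths is my name and it takes precomputed widths instead of calling t3dDrawWidth:

```c
#include <assert.h>
#include <math.h>

/* Picks a scale factor so that the widest of n strings ends up exactly
   2.6 units across, and every other string proportionally narrower. */
float scaleForWidths(const float* widths, int n) {
    float maxWidth = 0.0f;
    int i;
    for (i = 0; i < n; i++) {
        if (widths[i] > maxWidth) {
            maxWidth = widths[i];
        }
    }
    return 2.6f / maxWidth;
}
```

For example, if the widest string measures 5.2 font-heights, the scale comes out to 0.5, so that string ends up 2.6 units wide.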
//The four strings that are drawn
char* STRS[4] = {"Video", "Tutorials", "Rock", ".com"};

The array STRS contains the strings that we will draw.


void initRendering() {
    //...
    t3dInit();
}

In our initRendering function, we have to set up some stuff for drawing 3D text. Namely, we have to load the positions of the triangles for each character from the file "charset". So we call t3dInit(), which is also from text3d.h.
void drawScene() {
    //...

    //Draw the strings along the sides of a square
    glScalef(_scale, _scale, _scale);
    glColor3f(0.3f, 1.0f, 0.3f);
    for(int i = 0; i < 4; i++) {
        glPushMatrix();
        glRotatef(90 * i, 0, 1, 0);
        glTranslatef(0, 0, 1.5f / _scale);
        t3dDraw3D(STRS[i], 0, 0, 0.2f, 1.5f);
        glPopMatrix();
    }

    //...
}

Here's where we draw the 3D text. First, we scale by the appropriate factor. Then, for each string, we move to the appropriate side of the square and use t3dDraw3D to draw the string. The t3dDraw3D function takes five parameters.
void t3dDraw3D(char *str, int hAlign, int vAlign, float depth, float lineHeight );


The first parameter is the string to draw. The second is the horizontal alignment of the string; a negative number is a left alignment, 0 is a centered alignment, and a positive number is a right alignment. The third parameter is the vertical alignment of the string; a negative number is a top alignment, 0 is a centered alignment, and a positive number is a bottom alignment. (You could draw text with multiple lines if you wanted to, using newline characters.) The fourth parameter is the 3D depth of the character, as a multiple of the height of the font. The fifth parameter is the height of each line, as a multiple of the height of the font. It could be used to indicate the spacing between lines, if we were drawing text with multiple lines. But we're not, so we'll just use the default value of 1.5.

If you want to draw text without depth, where all of the polygons are in the same plane, you could call t3dDraw2D, which has the same parameters, except that it omits the depth of the text (since there is no depth). This is faster, since there are fewer polygons to draw, but it doesn't give us the nice-looking 3D text.
int main(int argc, char** argv) {
    //...
    _scale = computeScale(STRS);
    //...
}

Finally, in our main function, we compute the factor by which we're scaling the text by calling the computeScale function we saw earlier. There we have it! We've made some 3D text in OpenGL. Download the source code, compile the program, and run it.

Exercises

Change the program to have one spinning character, which starts as the lowercase letter 'a' and changes to whatever letter the user presses. Use the GLUT stroke functions (glutStrokeCharacter and glutStrokeWidth) rather than t3dDraw3D to draw it. You'll want to use the online documentation for GLUT to figure out how the functions work.

Change the program from a spinning square to a spinning octagon, where the faces have the letters of an eight-letter word (such as "spinning"). Use 2D text drawing (t3dDraw2D) rather than 3D text drawing.

Lesson 9: Animation
3D animation is a nice thing to have in our programs. There are a couple of ways to do 3D animation. We'll do animation using frames. We'll have an external file that stores the positions of certain vertices in our model at particular times in a loop of animation. To draw the model at a particular time, we'll take the two frames nearest to the particular time and take a weighted average of the vertices' positions; that is, we'll interpolate between the two frames. There are more flexible approaches to animation, notably skeletal animation. But we'll stick with the more straightforward approach. This lesson will be more complicated than previous lessons.

There are a bunch of file formats for representing 3D animations. We'll use MD2, the Quake 2 file format. Quake 2 may be old, but we'll use MD2 because the file format is open and straightforward and there are a bunch of MD2 files online that other people have made. Another reason we're using MD2 is because Blender, an open-source 3D modeling program, is able to export to MD2. Professionals normally use 3ds Max or Maya for 3D modeling. But those programs cost money, so we will use Blender. Using Blender, we made the 3D guy for our program, including a texture for him. Our program has the guy walking, as shown below:

This is what editing the guy in Blender looks like:

So now that we've made an MD2 animation of our guy, we'll have to load it in and animate it. I looked online for the MD2 file format, so that I could figure out how to do that. In the rest of this lesson, we'll see how exactly the MD2 file format works. We'll put all of the code specific to MD2 files in the md2model.h and md2model.c files. We'll have an MD2Model structure that stores all of the information about an animation and takes care of drawing the animation. Let's look at the md2model.h file to see what the structures look like:
typedef struct MD2Vertex_t {
    Vec3f pos;
    Vec3f normal;
} MD2Vertex;

typedef struct MD2Frame_t {
    char name[16];
    MD2Vertex* vertices;
} MD2Frame;

typedef struct MD2TexCoord_t {
    float texCoordX;
    float texCoordY;
} MD2TexCoord;

First, we have a few structures that our MD2Model will use. We have vertices, frames, texture coordinates, and triangles. Each frame has a name, which usually indicates the type of animation in which it is (e.g. "run", "stand"). The frames just store the positions and normals of each of the vertices using a vertices array. Each frame has the same number of vertices, so that the vertex at index 5, frame 1, for instance, represents the same part of the model as the vertex at index 5, frame 2, but is in a different position. A triangle is defined by the indices of the vertices in the frames' vertices arrays, and the indices of the texture coordinates in an array that will appear in the MD2Model structure.
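Because corresponding vertices share an index across frames, animating between two frames boils down to a per-component linear interpolation of each vertex's position. A sketch of that interpolation (the lerp helper is mine; the tutorial's actual drawing code isn't shown in this excerpt):

```c
#include <assert.h>
#include <math.h>

/* Linear interpolation between the same component of a vertex in two
   adjacent frames. t = 0 gives the first frame's value, t = 1 the second's. */
float lerp(float a, float b, float t) {
    return a * (1.0f - t) + b * t;
}
```

Applying lerp to the x, y, and z of every vertex, with t derived from the model's current time, produces the in-between poses that make the walk look smooth instead of jumping from frame to frame.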
typedef struct MD2Model_t {
    MD2Frame* frames;
    int numFrames;
    MD2TexCoord* texCoords;
    MD2Triangle* triangles;
    int numTriangles;

Here are the main fields that we'll need to draw the model. We have an array of frames, texture coordinates, and triangles.
GLuint textureId;

Here, we have the id of the texture for the figure that we'll animate.
    int startFrame; //The first frame of the current animation
    int endFrame;   //The last frame of the current animation

These are the starting and ending frames to use for animation.
    /* The position in the current animation. 0 indicates the beginning of
     * the animation, which is at the starting frame, and 1 indicates the
     * end of the animation, which is right when the starting frame is
     * reached again. It always lies between 0 and 1. */
    float time;
} MD2Model;

Er, just read the comments.


void MD2Model_init (MD2Model *p);
void MD2Model_clean (MD2Model *p);

Here are our init and clean functions.


//Switches to the given animation
void MD2Model_setAnimation(MD2Model *p, const char* name);

This function will let us set the current animation, since the MD2 file can actually store several animations in certain ranges of frames. Our animation, for example, will occupy frames 40 to 45. Each frame has a name, which will enable us to identify the appropriate frames for a given animation string, as we'll see later.
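The excerpt doesn't show how setAnimation matches frame names to an animation string. One plausible sketch, assuming frame names consist of the animation name followed by a frame number (e.g. "run01"), might look like this; the function and the naming assumption are entirely my guess, not the tutorial's code:

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Returns 1 if a frame name like "run01" belongs to the animation "run":
   the name must start with the animation string, followed only by digits. */
int frameMatchesAnimation(const char* frameName, const char* anim) {
    size_t n = strlen(anim);
    const char* p;
    if (strncmp(frameName, anim, n) != 0) {
        return 0;
    }
    p = frameName + n;
    if (*p == '\0') {
        return 0; /* need at least one digit after the animation name */
    }
    while (*p) {
        if (!isdigit((unsigned char)*p)) {
            return 0;
        }
        p++;
    }
    return 1;
}
```

Requiring trailing digits keeps "running1" from matching the "run" animation, which a plain prefix test would get wrong.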
//Advances the position in the current animation. The entire animation
//lasts one unit of time.
void MD2Model_advance(MD2Model *p, float dt);


This function will be used to advance the state of the animation. By repeatedly calling advance, we'll animate through the different positions of the 3D figure.
//Draws the current state of the animated model. void MD2Model_draw(MD2Model *p);

This function takes care of actually drawing the 3D model.


//Loads an MD2Model from the specified file. Returns NULL if there was
//an error loading it.
MD2Model* MD2Model_load(const char* filename);

The load function will load a given MD2 file. That's the md2model.h file. Now let's look at md2model.c.
//Normals used in the MD2 file format
float NORMALS[486] = {-0.525731f, 0.000000f, 0.850651f,
                      -0.442863f, 0.238856f, 0.864188f,
                      //...
                      -0.688191f, -0.587785f, -0.425325f};

Rather than storing normals directly, MD2 has 162 special normals and only gives the indices of the normals. This array contains all of the normals that MD2 uses.

When we load in the file, we're going to have to worry about a little thing called "endianness". When designing CPUs, the designers had to decide whether to store numbers with their most significant byte first or last. For example, the short integer 258 = 1(256) + 2 might be stored with the bytes (1, 2), with the most significant byte first, or with the bytes (2, 1), with the least significant byte first. The first means of storage is called "big-endian"; the second is called "little-endian". So, the people who designed CPUs, in their infinite wisdom, chose both. Some CPUs, including the Pentium, store numbers in little-endian form, and other CPUs store numbers in big-endian form. Stupid as it seems, which endianness is "better" has been the source of flame wars. So, we're stuck with both of them, a problem which has been annoying computer programmers for ages past.

What does this have to do with anything? The problem comes up when an integer that requires multiple bytes is stored in the MD2 file. It is stored in little-endian form. But the computer on which we load the file might not use little-endian form. So when we load the file, we have to write our code carefully to make sure that the endianness of the computer on which the program is running doesn't matter.
//Returns whether the system is little-endian
short littleEndian() {
    //The short value 1 has bytes (1, 0) in little-endian and (0, 1) in
    //big-endian
    short s = 1;
    return (((char*)&s)[0]) == 1;
}

This function will check whether we are on a little-endian or big-endian system. If the first byte of the short integer 1 is a 1, then we're on a little-endian machine; otherwise, we're on a big-endian machine.
//Converts a four-character array to an integer, using little-endian form
int toInt(const char* bytes) {
    return (int)(((unsigned char)bytes[3] << 24) |
                 ((unsigned char)bytes[2] << 16) |
                 ((unsigned char)bytes[1] << 8) |
                 (unsigned char)bytes[0]);
}

//Converts a two-character array to a short, using little-endian form
short toShort(const char* bytes) {
    return (short)(((unsigned char)bytes[1] << 8) |
                   (unsigned char)bytes[0]);
}

//Converts a two-character array to an unsigned short, using little-endian
//form
unsigned short toUShort(const char* bytes) {
    return (unsigned short)(((unsigned char)bytes[1] << 8) |
                            (unsigned char)bytes[0]);
}

Here, we have functions that will convert a sequence of bytes into an int, a short, or an unsigned short. They use the << bitshift operator, which basically just shoves some number of 0 bits into the end of the number. For example, the binary number 1001101 bit shifted by 5 is 100110100000. Any "extra" bits at the front are just removed. Note that the functions work regardless of the endianness of the machine on which the program is running.
//Converts a four-character array to a float, using little-endian form
float toFloat(const char* bytes) {
    float f;
    if (littleEndian()) {
        ((char*)&f)[0] = bytes[0];
        ((char*)&f)[1] = bytes[1];
        ((char*)&f)[2] = bytes[2];
        ((char*)&f)[3] = bytes[3];
    }
    else {
        ((char*)&f)[0] = bytes[3];
        ((char*)&f)[1] = bytes[2];
        ((char*)&f)[2] = bytes[1];
        ((char*)&f)[3] = bytes[0];
    }
    return f;
}

Not even floats are immune from the endianness issue. To convert four bytes into a float, we check whether we're on a little-endian machine and then set each byte of the float f as appropriate.
//Reads the next four bytes as an integer, using little-endian form
int readInt(FILE *input) {
    char buffer[4];
    fread(buffer, 1, 4, input);
    return toInt(buffer);
}

//Reads the next two bytes as a short, using little-endian form
short readShort(FILE *input) {
    char buffer[2];
    fread(buffer, 1, 2, input);
    return toShort(buffer);
}

//Reads the next two bytes as an unsigned short, using little-endian form
unsigned short readUShort(FILE *input) {
    char buffer[2];
    fread(buffer, 1, 2, input);
    return toUShort(buffer);
}

//Reads the next four bytes as a float, using little-endian form
float readFloat(FILE *input) {
    char buffer[4];
    fread(buffer, 1, 4, input);
    return toFloat(buffer);
}

//Calls readFloat three times and returns the results as a Vec3f object
Vec3f readVec3f(FILE *input) {
    float x = readFloat(input);
    float y = readFloat(input);
    float z = readFloat(input);
    Vec3f tmp;
    Vec3f_Init(&tmp, x, y, z);
    return tmp;
}

These functions make it convenient to read the next few bytes from a file as an int, short, unsigned short, float, or Vec3f.
//Makes the image into a texture, and returns the id of the texture
GLuint loadTexture(Image *image) {
    //...
}

Here's our loadTexture function from the lesson on textures.


//Loads the MD2 model
MD2Model* MD2Model_load(const char* filename) {
    int i, j;
    FILE *input = fopen(filename, "rb");
    char buffer[64];
    fread(buffer, 1, 4, input);
    //Should be "IDP2", if this is an MD2 file
    if (buffer[0] != 'I' || buffer[1] != 'D' ||
        buffer[2] != 'P' || buffer[3] != '2') {
        return NULL;
    }
    if (readInt(input) != 8) { //The version number
        return NULL;
    }

Here's the function that loads in an MD2 file. First, we check that the first four bytes of the file are "IDP2", which must be the first four bytes of every MD2 file. Then, we check that the next four bytes, interpreted as an integer, are the number 8, which they must be for the MD2 files that we're loading.
int textureWidth = readInt(input);  //The width of the textures
int textureHeight = readInt(input); //The height of the textures
readInt(input);                     //The number of bytes per frame
int numTextures = readInt(input);   //The number of textures
if (numTextures != 1) {
    return NULL;
}
int numVertices = readInt(input);   //The number of vertices
int numTexCoords = readInt(input);  //The number of texture coordinates
int numTriangles = readInt(input);  //The number of triangles
readInt(input);                     //The number of OpenGL commands
int numFrames = readInt(input);     //The number of frames

The MD2 file format dictates that next in the file, there should be certain information about the animation in a certain order. We read in this information and store it into variables. Some of the information we don't need, so we don't store it anywhere.
//Offsets (number of bytes after the beginning of the file to the beginning
//of where certain data appear)
int textureOffset = readInt(input);  //The offset to the textures
int texCoordOffset = readInt(input); //The offset to the texture coordinates
int triangleOffset = readInt(input); //The offset to the triangles
int frameOffset = readInt(input);    //The offset to the frames
readInt(input);                      //The offset to the OpenGL commands
readInt(input);                      //The offset to the end of the file

Next in the MD2 file should be certain values indicating the number of bytes from the beginning of the file where certain data appear.
//Load the texture
fseek(input, textureOffset, SEEK_SET);
fread(buffer, 1, 64, input);
if (strlen(buffer) < 5 ||
    strcmp(buffer + strlen(buffer) - 4, ".bmp") != 0) {
    return NULL;
}
Image* image = loadBMP(buffer);
GLuint textureId = loadTexture(image);
MD2Model* model = (MD2Model*)malloc(sizeof(MD2Model));
MD2Model_init(model);
model->textureId = textureId;

We go to where the texture is indicated, and load in the next 64 bytes as a string. The string is a filename where the texture for the model is. We make sure that the texture is a bitmap and load it in.
//Load the texture coordinates
fseek(input, texCoordOffset, SEEK_SET);
model->texCoords = (MD2TexCoord*)malloc(sizeof(MD2TexCoord) * numTexCoords);
for (i = 0; i < numTexCoords; i++) {
    MD2TexCoord* texCoord = model->texCoords + i;
    texCoord->texCoordX = (float)readShort(input) / textureWidth;
    texCoord->texCoordY = 1 - (float)readShort(input) / textureHeight;
}


Next, we load in the texture coordinates. Each texture coordinate is represented as two shorts. To get from each short to the appropriate float, we have to divide by the width or height of the texture that we found at the beginning of the file. For the y coordinate, we have to take 1 minus the coordinate because the MD2 file measures the y coordinate from the top of the texture, while OpenGL measures it from the bottom of the texture.
//Load the triangles
fseek(input, triangleOffset, SEEK_SET);
model->triangles = (MD2Triangle*)malloc(sizeof(MD2Triangle) * numTriangles);
model->numTriangles = numTriangles;
for (i = 0; i < numTriangles; i++) {
    MD2Triangle* triangle = model->triangles + i;
    for (j = 0; j < 3; j++) {
        triangle->vertices[j] = readUShort(input);
    }
    for (j = 0; j < 3; j++) {
        triangle->texCoords[j] = readUShort(input);
    }
}

//Load the frames
fseek(input, frameOffset, SEEK_SET);
model->frames = (MD2Frame*)malloc(sizeof(MD2Frame) * numFrames);
model->numFrames = numFrames;
for (i = 0; i < numFrames; i++) {
    MD2Frame* frame = model->frames + i;
    frame->vertices = (MD2Vertex*)malloc(sizeof(MD2Vertex) * numVertices);
    Vec3f scale = readVec3f(input);
    Vec3f translation = readVec3f(input);
    fread(frame->name, 1, 16, input);
    for (j = 0; j < numVertices; j++) {
        MD2Vertex* vertex = frame->vertices + j;
        fread(buffer, 1, 3, input);
        Vec3f v;
        Vec3f_Init(&v, (unsigned char)buffer[0],
                       (unsigned char)buffer[1],
                       (unsigned char)buffer[2]);
        vertex->pos.v[0] = translation.v[0] + (scale.v[0] * v.v[0]);
        vertex->pos.v[1] = translation.v[1] + (scale.v[1] * v.v[1]);
        vertex->pos.v[2] = translation.v[2] + (scale.v[2] * v.v[2]);
        fread(buffer, 1, 1, input);
        int normalIndex = (int)((unsigned char)buffer[0]);
        vertex->normal.v[0] = NORMALS[3 * normalIndex];
        vertex->normal.v[1] = NORMALS[3 * normalIndex + 1];
        vertex->normal.v[2] = NORMALS[3 * normalIndex + 2];
    }
}

Now, we load in the triangles, which are just a bunch of indices of vertices and texture coordinates.


Now, we load in the frames. Each frame starts with six floats, indicating vectors by which to scale and translate the vertices. Then, there are 16 bytes indicating the frame's name. Then come the vertices. For each vertex, we have three unsigned characters indicating the position, which we can convert to floats by scaling and translating them. Then, we have an unsigned character which gives the normal vector as an index in the NORMALS array that we saw earlier.
    model->startFrame = 0;
    model->endFrame = numFrames - 1;
    return model;
}

Finally, we set the starting and ending frames and return the model.
void MD2Model_setAnimation(MD2Model *p, const char* name) {
    /* The names of frames normally begin with the name of the animation in
     * which they are, e.g. "run", and are followed by a non-alphabetical
     * character. Normally, they indicate their frame number in the animation,
     * e.g. "run_1", "run_2", etc. */
    int found = 0;
    int i;
    for (i = 0; i < p->numFrames; i++) {
        MD2Frame* frame = p->frames + i;
        if (strlen(frame->name) > strlen(name) &&
            strncmp(frame->name, name, strlen(name)) == 0 &&
            !isalpha(frame->name[strlen(name)])) {
            if (!found) {
                found = 1;
                p->startFrame = i;
            }
            else {
                p->endFrame = i;
            }
        }
        else if (found) {
            break;
        }
    }
}

This function figures out the start and end frames for the indicated animation using the names of the different frames, which follow the pattern suggested by the comment.
void MD2Model_advance(MD2Model *p, float dt) {
    if (dt < 0) {
        return;
    }
    p->time += dt;
    if (p->time < 1000000000) {
        p->time -= (int)p->time;
    }
    else {
        p->time = 0;
    }
}


Now, we have a function for advancing the animation, which we do by increasing the time field. To keep it between 0 and 1, we use time -= (int)time (unless the time is REALLY big, in which case we might run into problems converting it into an integer).
void MD2Model_draw(MD2Model *p) {
    int i, j;
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, p->textureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

Here's where we draw the 3D model. We start by telling OpenGL the texture and the type of texture mapping that we want to use.
    //Figure out the two frames between which we are interpolating
    int frameIndex1 = (int)(p->time * (p->endFrame - p->startFrame + 1)) +
                      p->startFrame;
    if (frameIndex1 > p->endFrame) {
        frameIndex1 = p->startFrame;
    }
    int frameIndex2;
    if (frameIndex1 < p->endFrame) {
        frameIndex2 = frameIndex1 + 1;
    }
    else {
        frameIndex2 = p->startFrame;
    }
    MD2Frame* frame1 = p->frames + frameIndex1;
    MD2Frame* frame2 = p->frames + frameIndex2;

Now, using the time field, we figure out the two frames between which we want to interpolate.
    //Figure out the fraction that we are between the two frames
    float frac = (p->time -
                  (float)(frameIndex1 - p->startFrame) /
                  (float)(p->endFrame - p->startFrame + 1)) *
                 (p->endFrame - p->startFrame + 1);

Now, we figure out what fraction we are between the two frames. 0 means that we are at the first frame, 1 means that we are at the second, and 0.5 means that we are halfway in between.
    //Draw the model as an interpolation between the two frames
    glBegin(GL_TRIANGLES);
    Vec3f pos;    //= v1->pos * (1 - frac) + v2->pos * frac
    Vec3f normal; //= v1->normal * (1 - frac) + v2->normal * frac
    MD2Vertex *v1;
    MD2Vertex *v2;
    for (i = 0; i < p->numTriangles; i++) {
        MD2Triangle* triangle = p->triangles + i;
        for (j = 0; j < 3; j++) {
            v1 = frame1->vertices + triangle->vertices[j];
            v2 = frame2->vertices + triangle->vertices[j];
            pos.v[0] = v1->pos.v[0] * (1 - frac) + v2->pos.v[0] * frac;
            pos.v[1] = v1->pos.v[1] * (1 - frac) + v2->pos.v[1] * frac;
            pos.v[2] = v1->pos.v[2] * (1 - frac) + v2->pos.v[2] * frac;

Now, we go through the triangles, and for each vertex, take the position to be an interpolation between their positions in the two frames.


            normal.v[0] = v1->normal.v[0] * (1 - frac) + v2->normal.v[0] * frac;
            normal.v[1] = v1->normal.v[1] * (1 - frac) + v2->normal.v[1] * frac;
            normal.v[2] = v1->normal.v[2] * (1 - frac) + v2->normal.v[2] * frac;
            if (normal.v[0] == 0 && normal.v[1] == 0 && normal.v[2] == 0) {
                normal.v[0] = 0;
                normal.v[1] = 0;
                normal.v[2] = 1;
            }
            glNormal3f(normal.v[0], normal.v[1], normal.v[2]);

We do the same thing for the normal vectors. If the average happens to be the zero vector, we change it to an arbitrary vector, since the zero vector has no direction and can't be used as a normal vector. Actually there's a better way to average two directions, but we'll stick with a linear average because it's easier.
            MD2TexCoord* texCoord = p->texCoords + triangle->texCoords[j];
            glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY);
            glVertex3f(pos.v[0], pos.v[1], pos.v[2]);
        }
    }
    glEnd();
}

Now, we just find the appropriate texture coordinate and call glTexCoord2f and glVertex3f. That does it for the MD2 file format. Let's take a look at main.c.
const float FLOOR_TEXTURE_SIZE = 15.0f; //The size of each floor "tile"

This is the size of each "tile" on the floor; that is, each copy of the floor image that you saw in the program's screenshot.
float _angle = 30.0f;
MD2Model* _model;
int _textureId;
//The forward position of the guy relative to an arbitrary floor "tile"
float _guyPos = 0;

Here are some variables that will store the camera angle, the MD2Model object, the id of the floor texture, and how far the guy has walked, modulo the size of the floor tile.
void initRendering() {
    //...
    //Load the model
    _model = MD2Model_load("tallguy.md2");
    if (_model != NULL) {
        MD2Model_setAnimation(_model, "run");
    }
    //Load the floor texture
    Image* image = loadBMP("vtr.bmp");
    _textureId = loadTexture(image);
    free(image); //This is C, so we free rather than delete the image
}


In our initRendering function, we load the model and the floor texture.
void drawScene() {
    //...
    //Draw the guy
    if (_model != NULL) {
        glPushMatrix();
        glTranslatef(0.0f, 4.5f, 0.0f);
        glRotatef(-90.0f, 0.0f, 0.0f, 1.0f);
        glScalef(0.5f, 0.5f, 0.5f);
        MD2Model_draw(_model);
        glPopMatrix();
    }

Here's where we draw the guy. We have to translate, rotate, and scale to make him the right size, at the right position, and facing in the right direction. I found out the appropriate translation and scaling factor by trial and error. The correct numbers depend on the actual vertex positions that I set up when I created the model in Blender.
    //Draw the floor
    glTranslatef(0.0f, -5.4f, 0.0f);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, _textureId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBegin(GL_QUADS);
    glNormal3f(0.0f, 1.0f, 0.0f);
    glTexCoord2f(2000 / FLOOR_TEXTURE_SIZE, _guyPos / FLOOR_TEXTURE_SIZE);
    glVertex3f(-1000.0f, 0.0f, -1000.0f);
    glTexCoord2f(2000 / FLOOR_TEXTURE_SIZE,
                 (2000 + _guyPos) / FLOOR_TEXTURE_SIZE);
    glVertex3f(-1000.0f, 0.0f, 1000.0f);
    glTexCoord2f(0.0f, (2000 + _guyPos) / FLOOR_TEXTURE_SIZE);
    glVertex3f(1000.0f, 0.0f, 1000.0f);
    glTexCoord2f(0.0f, _guyPos / FLOOR_TEXTURE_SIZE);
    glVertex3f(1000.0f, 0.0f, -1000.0f);
    glEnd();
    //...
}

Now, we draw the floor. The floor will just be a quadrilateral that extends very far in each direction. To make the guy appear to move forward, we set the texture coordinates so that the floor tiles appear to move in the appropriate direction.
void update(int value) {
    _angle += 0.7f;
    if (_angle > 360) {
        _angle -= 360;
    }
    //Advance the animation
    if (_model != NULL) {
        MD2Model_advance(_model, 0.025f);
    }


    //Update _guyPos
    _guyPos += 0.08f;
    while (_guyPos > FLOOR_TEXTURE_SIZE) {
        _guyPos -= FLOOR_TEXTURE_SIZE;
    }
    glutPostRedisplay();
    glutTimerFunc(25, update, 0);
}

Now we have our update function. We just increase the camera angle and the guy's position, and call advance on the MD2Model object. I figured out the rate at which to increase the _guyPos variable using trial and error. And with that code, we have created a 3D walking guy. Download the source code, compile the program, and run it.

Exercises

Change the advance method of the MD2Model class to allow a negative value for the parameter dt. Then, make the guy walk backwards by playing the animation in reverse and making the floor image scroll forwards instead of backwards.

Rather than interpolating between frames, have the draw method only draw individual frames. The guy's motion will be really choppy. Interpolation allows us to have smooth, good-looking motion even with relatively few frames.

Make it so that when the user presses the space bar (key == ' '), the guy freezes (but not the camera), and when the user presses the space bar again, the guy starts moving again. (Perhaps the guy freezes because he's so stunned at how awesome the animation looks.)

Lesson 10: Collision Detection


A common thing to do in video games, simulations, and other programs is to have something happen when two objects hit each other, such as having them bounce off each other or stop. For this, we need to use collision detection. The basic idea of collision detection is to locate which objects are intersecting at any given moment, so that we can handle the intersection in some way. Often, we want to do this in real-time, so our solution had better be fast.

It turns out that collision detection is hard. For this reason, in a lot of demo or test versions of upcoming games, the collision detection is rather buggy. Furthermore, there's no one right answer for collision detection. It all depends on what program you are making. Important factors for designing collision detection include when and how collisions "usually" occur, what types of flaws are more or less noticeable to the user, and which collisions matter the most. In particular, if only collisions with the protagonist really matter, collision detection is a much different problem than if all collisions between a pair of game objects are relevant. We only cover a fraction of techniques used in collision detection.

Frequently, collision detection revolves around tricks that group together closer objects. Often, it utilizes the fact that the scene doesn't change much between frames. You can get exact collision detection based on all of the 3D polygons of the objects, but usually it's better to approximate the shapes of objects as one or more simpler shapes such as boxes, cylinders, and spheres. Another common technique is to have a quick and dirty check that determines whether two objects might be colliding, which one performs before potentially wasting time on a longer check. For example, one could check whether the bounding spheres of two objects intersect before performing a more complicated check.

Of the many collision detection techniques, we show one in detail, to give you an idea of possible collision detection strategies. First, let's look at the problem we want to solve. Download, compile, and run the program. We have a box with the upper and lower walls shown; the rest of the walls are invisible. Every time you press the space bar, it will randomly add 20 balls to the box. They fall with gravity and bounce off of each other and the walls.

The basic idea of the program is to step by 10 milliseconds, updating the balls' positions and velocities, check for collisions and make all colliding balls bounce, and repeat. We're going to focus on the part where we check for collisions.

To find all of the collisions, one thing we could do is check every pair of balls, and see if their distances are less than the sum of their radii. However, by the time we reached 300 balls, we'd have to check about 50,000 pairs of balls for potential collisions, even though there are usually very few collisions. Maybe there's a faster way.

One thing we could try is to divide the cube in half along each dimension, into eight smaller cubes. Then, we could figure out in which cube(s) each ball is, and check every pair of balls in each smaller cube for collisions. Take a look at this diagram of the 2D equivalent of this technique:

If we were to check every pair of balls in the above picture for collisions, we would have to check 105 pairs of balls. If instead, we check each pair of balls in each of the four smaller squares, there are only 3 + 3 + 15 + 10 = 31 pairs to check. Note that two of the balls appear in two of the smaller squares. This will also occur in the 3D version of the problem, but it will be relatively uncommon.

We've sped things up a little, but we can do even better. Our basic strategy to find potential

collisions in a cube was to divide the cube into eight smaller cubes, and then give some set of potential collisions within each smaller cube. For these potential collisions, we took every pair of balls in each smaller cube. But why stop there? We can divide the smaller cubes themselves into eight cubes, and take every pair of balls in each even smaller cube, so that we have even fewer pairs of balls to check.

We can repeat this indefinitely, but after a while, it ceases to be helpful. For instance, if there are very few balls in a cube, say 3, then it's easier to just check all of the pairs of balls than to keep dividing up the cube. Plus, the more we divide up the cubes, the more frequently balls will appear in multiple cubes, which is bad, because this tends to produce duplicate pairs and false positives.

So, let's use the following strategy: for a given cube, if there are a lot of balls in it, make eight smaller cubes, and let them take care of finding potential collisions. If there are not so many balls, just use every pair of balls as the set of potential collisions. This results in a tree structure; each cube is a node in the tree, and if it is divided into smaller cubes, these cubes are its eight children. It's called an "octree", with one "t" (the 2D equivalent is called a "quadtree"). Below is an example of the 2D version of the tree structure:

By further dividing the squares, we've reduced the number of pairs of balls to check even further, from 31 to 15.

Once the length of the cubes approaches the radius of the balls, subdividing the cubes will make it very common for the balls to appear in many cubes, which is bad. For this reason, we'll limit the depth of the tree. That is, if we were going to subdivide a cube, but the cube is already at some depth x in the tree, then we don't subdivide it.

Another thing: the scene doesn't change much from moment to moment. So, rather than constantly creating and destroying an octree, we'll create an octree at the beginning of the program, and whenever a ball moves or is created, we'll just change the octree. Now, not only do we need to divide up a cube when it has too many balls, but we have to un-divide a cube when it has too few, in order to ensure that each leaf-level cube has not too many, but not too few balls. So, whenever a cube goes above x balls, we'll divide it (unless the node is at the maximum allowable depth), and whenever a cube drops to below y balls, we'll un-divide it. We want x to be a little bigger than y, so that we don't have to keep dividing and un-dividing a given cube too frequently.

Okay, let's take a look at some code. Be warned: the program is a good deal more complex than the programs in previous lessons. Before we look at the code for the octree, we'll look at the rest of the code. After the include statements, we define the randomFloat function, which returns a random float from 0 to less than 1.
//Stores information regarding a ball
typedef struct Ball_t {
    Vec3f v;   //Velocity
    Vec3f pos; //Position
    float r;   //Radius
    Vec3f color;
} Ball;

We define our ball structure, which has the velocity, position, radius, and color of each ball. The velocity of the ball indicates how quickly it is moving in each direction. For example, a velocity of (3, -2, -5) means that it is moving 3 units per second in the positive x direction, 2 units per second in the downward direction, and 5 units per second in the negative z direction.
typedef enum Wall_t {WALL_LEFT, WALL_RIGHT, WALL_FAR, WALL_NEAR,
                     WALL_TOP, WALL_BOTTOM} Wall;

The six walls are represented in an enumeration.


//Stores a pair of balls
typedef struct BallPair_t {
    Ball* ball1;
    Ball* ball2;
} BallPair;

//Stores a ball and a wall
typedef struct BallWallPair_t {
    Ball* ball;
    Wall wall;
} BallWallPair;

We have structures to store ball-ball and ball-wall pairs, so that we can indicate potential collisions. Note that up until this point, I've been ignoring ball-wall collisions. This is because they take much less time to compute than ball-ball collisions, so it's not as important to optimize them. But don't worry, we'll get to them.
//Puts potential ball-ball collisions in potentialCollisions. It must return
//all actual collisions, but it need not return only actual collisions.
void potentialBallBallCollisions(BallPair_vector *potentialCollisions,
                                 Ball_vector *balls, Octree* octree) {
    //Fast method
    Octree_potentialBallBallCollisions(octree, potentialCollisions);
}

//Puts potential ball-wall collisions in potentialCollisions. It must return
//all actual collisions, but it need not return only actual collisions.
void potentialBallWallCollisions(BallWallPair_vector *potentialCollisions,
                                 Ball_vector *balls, Octree* octree) {
    //Fast method
    Octree_potentialBallWallCollisions(octree, potentialCollisions,
                                       WALL_LEFT, 'x', 0);
    Octree_potentialBallWallCollisions(octree, potentialCollisions,
                                       WALL_RIGHT, 'x', 1);
    Octree_potentialBallWallCollisions(octree, potentialCollisions,
                                       WALL_BOTTOM, 'y', 0);
    Octree_potentialBallWallCollisions(octree, potentialCollisions,
                                       WALL_TOP, 'y', 1);
    Octree_potentialBallWallCollisions(octree, potentialCollisions,
                                       WALL_FAR, 'z', 0);
    Octree_potentialBallWallCollisions(octree, potentialCollisions,
                                       WALL_NEAR, 'z', 1);
}

In these functions, we compute all possible ball-ball and ball-wall collisions, and add them to a custom dynamic vector (see the BallPair_vector and BallWallPair_vector structures). They just ask the octree for the potential collisions. As I mentioned, we'll worry about how the octree works after we cover the basic mechanics of the program.

Next, we have the moveBalls function, which moves all of the balls by their velocity times some float dt, in order to advance them by some small amount of time. Then, we have the applyGravity function, called every TIME_BETWEEN_UPDATES seconds. It applies gravity to the balls by decreasing the y coordinate of their velocities by GRAVITY * TIME_BETWEEN_UPDATES. That's similar to how gravity works in real life; it decreases an object's velocity in the y direction at a rate of 9.8 meters per second per second.
//Returns whether two balls are colliding
int testBallBallCollision(Ball* b1, Ball* b2) {
    //Check whether the balls are close enough
    float r = b1->r + b2->r;
    Vec3f tmp;
    tmp.v[0] = b1->pos.v[0] - b2->pos.v[0];
    tmp.v[1] = b1->pos.v[1] - b2->pos.v[1];
    tmp.v[2] = b1->pos.v[2] - b2->pos.v[2];
    if (Vec3f_magnitudeSquared(&tmp) < r * r) {
        //Check whether the balls are moving toward each other
        Vec3f netVelocity;
        netVelocity.v[0] = b1->v.v[0] - b2->v.v[0];
        netVelocity.v[1] = b1->v.v[1] - b2->v.v[1];
        netVelocity.v[2] = b1->v.v[2] - b2->v.v[2];
        Vec3f displacement;
        displacement.v[0] = b1->pos.v[0] - b2->pos.v[0];
        displacement.v[1] = b1->pos.v[1] - b2->pos.v[1];
        displacement.v[2] = b1->pos.v[2] - b2->pos.v[2];
        return Vec3f_dot(&netVelocity, &displacement) < 0;
    }
    else {
        return 0;
    }
}

This function tests whether two balls are currently colliding. If Vec3f_magnitudeSquared(&tmp) < r * r is false, meaning the balls are farther apart than the sum of their radii, then we know they're not colliding. Otherwise, we have to check whether the balls are moving towards each other or away from each other. If they're moving away from each other, then most likely they just collided, and they shouldn't "collide" again.
//Handles all ball-ball collisions
void handleBallBallCollisions(Ball_vector *balls, Octree* octree) {
    int i;
    BallPair_vector bps;
    BallPair_vector_init(&bps);
    potentialBallBallCollisions(&bps, balls, octree);
    BallPair *bp;
    Ball* b1;
    Ball* b2;
    Vec3f *displacement;
    Vec3f tmp;
    for (i = 0; i < bps.size; i++) {
        bp = bps.vec[i];
        b1 = bp->ball1;
        b2 = bp->ball2;
        if (testBallBallCollision(b1, b2)) {
            //Make the balls reflect off of each other
            tmp.v[0] = b1->pos.v[0] - b2->pos.v[0];
            tmp.v[1] = b1->pos.v[1] - b2->pos.v[1];
            tmp.v[2] = b1->pos.v[2] - b2->pos.v[2];
            displacement = Vec3f_normalize(&tmp);
            //Compute each dot product once, before modifying the velocity
            float dot1 = Vec3f_dot(&b1->v, displacement);
            float dot2 = Vec3f_dot(&b2->v, displacement);
            b1->v.v[0] -= 2 * displacement->v[0] * dot1;
            b1->v.v[1] -= 2 * displacement->v[1] * dot1;
            b1->v.v[2] -= 2 * displacement->v[2] * dot1;
            b2->v.v[0] -= 2 * displacement->v[0] * dot2;
            b2->v.v[1] -= 2 * displacement->v[1] * dot2;
            b2->v.v[2] -= 2 * displacement->v[2] * dot2;
        }
    }
}
handleBallBallCollisions makes all colliding balls bounce off of each other. First, we call potentialBallBallCollisions to find possible collisions. Then, we go through all of the potential collisions to find which ones are really collisions. For each one, we make the balls bounce off of each other, by reversing the velocity of each in the direction from the center of one ball to the other. The following picture illustrates how we compute the velocity of a ball after bouncing:


In the picture, d is the initial velocity of the ball. s is its projection onto the vector from the ball to the ball off which it's bouncing. d - 2s is the velocity of the ball after the bounce. To determine s, we find the direction from the second ball to the first:

tmp.v[0] = b1->pos.v[0] - b2->pos.v[0];
tmp.v[1] = b1->pos.v[1] - b2->pos.v[1];
tmp.v[2] = b1->pos.v[2] - b2->pos.v[2];
displacement = Vec3f_normalize(&tmp);

Then, we take the dot product of the initial velocity and this direction, which gives s. Since the balls don't slow down when they bounce, the balls will keep bouncing around forever, allowing for days or even years of non-stop entertainment.

Then, we have the testBallWallCollision function, which returns whether a particular ball is colliding with a given wall. Again, we have to check to make sure that the ball is moving toward the wall before we say that they're colliding.
//Handles all ball-wall collisions
void handleBallWallCollisions(Ball_vector *balls, Octree* octree) {
    int i;
    BallWallPair_vector bwps;
    BallWallPair_vector_init(&bwps);
    potentialBallWallCollisions(&bwps, balls, octree);
    BallWallPair *bwp;
    Ball* b;
    Wall w;
    Vec3f tmp;
    Vec3f *dir;
    for (i = 0; i < bwps.size; i++) {
        bwp = bwps.vec[i];
        b = bwp->ball;
        w = bwp->wall;
        if (testBallWallCollision(b, w)) {
            //Make the ball reflect off of the wall
            tmp = wallDirection(w);
            dir = Vec3f_normalize(&tmp);
            //Compute the dot product once, before modifying the velocity
            float dot = Vec3f_dot(&b->v, dir);
            b->v.v[0] -= 2 * dir->v[0] * dot;
            b->v.v[1] -= 2 * dir->v[1] * dot;
            b->v.v[2] -= 2 * dir->v[2] * dot;
        }
    }
}

Now, we have a function that makes all balls that are colliding with a wall bounce. Like in handleBallBallCollisions, we compute potential ball-wall collisions, go through them to find the actual ball-wall collisions, and make the balls bounce. To bounce, we reverse the velocity of the ball in the direction perpendicular to the wall.
//Applies gravity and handles all collisions. Should be called every
//TIME_BETWEEN_UPDATES seconds.
void performUpdate(Ball_vector *balls, Octree* octree) {
    applyGravity(balls);
    handleBallBallCollisions(balls, octree);
    handleBallWallCollisions(balls, octree);
}

Now, we lump applyGravity, handleBallBallCollisions, and handleBallWallCollisions together into a performUpdate function, which is what we call every TIME_BETWEEN_UPDATES seconds. Next is an advance function, which takes care of calling moveBalls, and of calling performUpdate every TIME_BETWEEN_UPDATES seconds.
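The fixed-timestep bookkeeping that advance performs can be sketched as an accumulator: subtract the frame time, and fire one update each time the accumulator crosses zero. This is an illustrative sketch, not the tutorial's exact code; the names and the TIME_BETWEEN_UPDATES value here are assumptions:

```c
#define TIME_BETWEEN_UPDATES 0.01f  /* assumed update interval, in seconds */

/* Returns how many performUpdate calls a frame of length dt would trigger,
 * updating *timeUntilUpdate in place. */
int advanceSketch(float dt, float *timeUntilUpdate) {
    int updates = 0;
    *timeUntilUpdate -= dt;
    while (*timeUntilUpdate < 0) {
        updates++; /* the real code calls performUpdate(balls, octree) here */
        *timeUntilUpdate += TIME_BETWEEN_UPDATES;
    }
    return updates;
}
```

Running physics on a fixed timestep like this keeps the simulation stable even when the rendering frame rate varies.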
Ball_vector _balls; //All of the balls in play
float _angle = 0.0f; //The camera angle
Octree* _octree; //An octree with all of the balls
//The amount of time until performUpdate should be called
float _timeUntilUpdate = 0;
GLuint _textureId;

Here are all of our global variables. Global variables are normally bad: to understand a global variable, you potentially have to keep the whole main.c file in your head at once, and globals are easily abused by altering them in ways that may confuse or subtly affect other functions. There are better approaches than global variables, but we use them here because we don't want to distract from collision detection. Instead, we'll pretend they're not global, and that they can only be accessed in the "top-level functions" initRendering, drawScene, handleKeypress, handleResize, and a function we'll see called cleanup. To make them stand out, we'll have them all start with underscores.
void handleKeypress(unsigned char key, int x, int y) {
    switch (key) {
        case 27: //Escape key
            cleanup();
            exit(0);

When the user presses ESC, we call cleanup and exit the program.
        case ' ': //Add 20 balls with a random position, velocity, radius, and color
        {
            int i;
            Ball* ball;
            for(i = 0; i < 20; i++) {
                ball = (Ball*) malloc(sizeof(Ball));
                Vec3f_Init(&ball->pos, 8 * randomFloat() - 4,
                           8 * randomFloat() - 4, 8 * randomFloat() - 4);
                Vec3f_Init(&ball->v, 8 * randomFloat() - 4,
                           8 * randomFloat() - 4, 8 * randomFloat() - 4);
                ball->r = 0.1f * randomFloat() + 0.1f;
                Vec3f_Init(&ball->color, 0.6f * randomFloat() + 0.2f,
                           0.6f * randomFloat() + 0.2f, 0.6f * randomFloat() + 0.2f);
                Ball_vector_push(&_balls, ball);
                Octree_add(_octree, ball);
            }
        }
    }
}

When the user presses space bar, we make 20 balls with random positions, velocities, radii, and colors, and add them to the octree and the _balls vector. If you look at drawScene, you'll see that we first draw the top and bottom of the box. Then, we draw the balls, using the following code:
//Draw the balls
int i;
Ball* ball;
for(i = 0; i < _balls.size; i++) {
    ball = _balls.vec[i];
    glPushMatrix();
    glTranslatef(ball->pos.v[0], ball->pos.v[1], ball->pos.v[2]);
    glColor3f(ball->color.v[0], ball->color.v[1], ball->color.v[2]);
    glutSolidSphere(ball->r, 12, 12); //Draw a sphere
    glPopMatrix();
}

We have a new function here, glutSolidSphere, which draws a sphere. The first parameter is the radius of the sphere. The second and third parameters indicate how many subdivisions are used to draw the sphere; the bigger the numbers, the more polygons we use and the better the sphere will look.
//Called every TIMER_MS milliseconds
void update(int value) {
    advance(&_balls, _octree, (float)TIMER_MS / 1000.0f, _timeUntilUpdate);
    _angle += (float)TIMER_MS / 100;
    if (_angle > 360) {
        _angle -= 360;
    }

    glutPostRedisplay();
    glutTimerFunc(TIMER_MS, update, 0);
}

Our update function just calls advance and increases the angle of rotation. That does it for the basic mechanics; now let's see how our octree works.
const int MAX_OCTREE_DEPTH = 6;
const int MIN_BALLS_PER_OCTREE = 3;
const int MAX_BALLS_PER_OCTREE = 6;

These are the parameters of our octree. We want a maximum depth of 6. When the number of balls in a cube reaches 6, we want to divide it into smaller cubes. When it goes below 3, we want to undivide it.
typedef struct Octree_t {
    Vec3f corner1; //(minX, minY, minZ)
    Vec3f corner2; //(maxX, maxY, maxZ)
    Vec3f center;  //((minX + maxX) / 2, (minY + maxY) / 2, (minZ + maxZ) / 2)

We start with the fields in our Octree structure. We have corner1, which is the lower-left-far corner of the cube; corner2, which is the upper-right-near corner; and center, which is the middle of the cube.
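The center field is just the component-wise midpoint of the two corners, exactly as the comment's formula says. A tiny sketch of that computation (plain float[3] arrays standing in for Vec3f):

```c
/* center[i] = (corner1[i] + corner2[i]) / 2, as in the Octree comment */
void midpoint(const float c1[3], const float c2[3], float center[3]) {
    int i;
    for (i = 0; i < 3; i++)
        center[i] = (c1[i] + c2[i]) / 2;
}
```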
/* The children of this, if this has any. children[0][*][*] are the
 * children with x coordinates ranging from minX to centerX.
 * children[1][*][*] are the children with x coordinates ranging from
 * centerX to maxX. Similarly for the other two dimensions of the
 * children array.
 */
struct Octree_t *children[2][2][2];

Now, we have the children nodes of the octree, if there are any. The children would themselves be octrees. Read the comment above the field.
//Whether this has children
int hasChildren;
//The balls in this, if this doesn't have any children
//set<Ball*> balls;
Ball *balls[MAX_BALLS];
int i_balls;
//The depth of this in the tree
int depth;
//The number of balls in this, including those stored in its children
int numBalls;

These fields are pretty self-explanatory. The balls variable is a fixed-size array with a maximum size of MAX_BALLS = 200, so each node can store at most 200 balls directly.
//Adds a ball to or removes one from the children of this
void Octree_fileBall(Octree *p, Ball* ball, Vec3f pos, int addBall) {
    int x, y, z;
    //Figure out in which child(ren) the ball belongs
    for(x = 0; x < 2; x++) {
        if (x == 0) {
            if (pos.v[0] - ball->r > p->center.v[0]) {
                continue;
            }
        }
        else if (pos.v[0] + ball->r < p->center.v[0]) {
            continue;
        }

        for(y = 0; y < 2; y++) {
            if (y == 0) {
                if (pos.v[1] - ball->r > p->center.v[1]) {
                    continue;
                }
            }
            else if (pos.v[1] + ball->r < p->center.v[1]) {
                continue;
            }

            for(z = 0; z < 2; z++) {
                if (z == 0) {
                    if (pos.v[2] - ball->r > p->center.v[2]) {
                        continue;
                    }
                }
                else if (pos.v[2] + ball->r < p->center.v[2]) {
                    continue;
                }

                //Add or remove the ball
                if (addBall) {
                    Octree_add(p->children[x][y][z], ball);
                }
                else {
                    Octree_remove(p->children[x][y][z], ball, pos);
                }
            }
        }
    }
}

The fileBall function figures out which children a ball belongs in, based on the position pos, and either adds it to or removes it from those children, calling the add and remove methods that we'll see later. To make things easier, rather than checking whether a given ball intersects each cube, we check whether the ball's bounding box intersects each cube. It's okay for a node to have a few extra balls like this.
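Along each axis, the bounding-box test in fileBall reduces to two one-dimensional checks against the split plane. A sketch of that per-axis test (the function name is illustrative; the logic mirrors the continue conditions above):

```c
/* Returns nonzero if a ball at coordinate p with radius r overlaps the
 * given child half along one axis: half 0 is below the split plane at
 * center, half 1 is above it. fileBall skips (continues past) a child
 * exactly when this test fails for some axis. */
int overlapsHalf(float p, float r, float center, int half) {
    if (half == 0)
        return p - r <= center;  /* bounding box reaches below the plane */
    return p + r >= center;      /* bounding box reaches above the plane */
}
```

A ball straddling the split plane passes the test for both halves, which is why it can end up filed in more than one child.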
//Creates children of this, and moves the balls in this to the children
void Octree_haveChildren(Octree *p) {
    int x, y, z;
    Vec3f v1, v2;
    for(x = 0; x < 2; x++) {
        float minX;
        float maxX;
        if (x == 0) {
            minX = p->corner1.v[0];
            maxX = p->center.v[0];
        }
        else {
            minX = p->center.v[0];
            maxX = p->corner2.v[0];
        }

        for(y = 0; y < 2; y++) {
            float minY;
            float maxY;
            if (y == 0) {
                minY = p->corner1.v[1];
                maxY = p->center.v[1];
            }
            else {
                minY = p->center.v[1];
                maxY = p->corner2.v[1];
            }

            for(z = 0; z < 2; z++) {
                float minZ;
                float maxZ;
                if (z == 0) {
                    minZ = p->corner1.v[2];
                    maxZ = p->center.v[2];
                }
                else {
                    minZ = p->center.v[2];
                    maxZ = p->corner2.v[2];
                }

                p->children[x][y][z] = (Octree*) malloc(sizeof(Octree));
                Vec3f_Init(&v1, minX, minY, minZ);
                Vec3f_Init(&v2, maxX, maxY, maxZ);
                Octree_init(p->children[x][y][z], v1, v2, p->depth + 1);
            }
        }
    }

    //Remove all balls from "balls" and add them to the new children
    for(x = 0; x < p->i_balls; x++) {
        Octree_fileBall(p, p->balls[x], p->balls[x]->pos, 1);
    }
    p->i_balls = 0;

    p->hasChildren = 1;
}

The haveChildren function is what divides a cube into eight smaller cubes, whenever we need to do that. To make each child, we call p->children[x][y][z] = (Octree*) malloc (sizeof (Octree));. Next, we have the collectBalls method, which finds all of the balls in a node or one of its children. We'll need this for when we un-divide a cube.
//Destroys the children of this, and moves all balls in its descendants
//to the "balls" set
void Octree_destroyChildren(Octree *p) {
    int x, y, z;
    //Move all balls in descendants of this to the "balls" set
    Octree_collectBalls(p, p);

    for(x = 0; x < 2; x++) {
        for(y = 0; y < 2; y++) {
            for(z = 0; z < 2; z++) {
                Octree_clean(p->children[x][y][z]);
                free(p->children[x][y][z]);
                p->children[x][y][z] = NULL;
            }
        }
    }

    p->hasChildren = 0;
}

This is where we un-divide a cube.


//Removes the specified ball at the indicated position
void Octree_remove(Octree *p, Ball* ball, Vec3f pos) {
    p->numBalls--;

    if (p->hasChildren && p->numBalls < MIN_BALLS_PER_OCTREE) {
        Octree_destroyChildren(p);
    }

    if (p->hasChildren) {
        Octree_fileBall(p, ball, pos, 0);
    }
    else {
        // balls.erase(ball);
        //Find the ball and overwrite it with the last one in the array
        int i;
        for(i = 0; i < p->i_balls; i++) {
            if (p->balls[i] == ball) {
                p->balls[i] = p->balls[p->i_balls - 1];
                p->i_balls--;
                break;
            }
        }
    }
}

This removes a ball from the octree. Before we move on, we should know how we identify potential ball-wall collisions. To find potential collisions with the left wall, we just find the nodes that are at the extreme left, and return all of those balls. We use the same idea for the other five walls.
/* Helper function for potentialBallWallCollisions(vector). Adds
 * potential ball-wall collisions to cs, where w is the type of wall,
 * coord is the relevant coordinate of the wall ('x', 'y', or 'z'), and
 * dir is 0 if the wall is in the negative direction and 1 if it is in
 * the positive direction. Assumes that this is in the extreme
 * direction of the coordinate, e.g. if w is WALL_TOP, the function
 * assumes that this is in the far upward direction. */
void Octree_potentialBallWallCollisions(Octree *p, BallWallPair_vector *cs,
                                        Wall w, char coord, int dir) {
    int dir2, dir3;
    if (p->hasChildren) {
        //Recursively call potentialBallWallCollisions on the correct
        //half of the children (e.g. if w is WALL_TOP, call it on
        //children above centerY)
        Octree *child;
        for(dir2 = 0; dir2 < 2; dir2++) {
            for(dir3 = 0; dir3 < 2; dir3++) {
                switch (coord) {
                    case 'x':
                        child = p->children[dir][dir2][dir3];
                        break;
                    case 'y':
                        child = p->children[dir2][dir][dir3];
                        break;
                    case 'z':
                        child = p->children[dir2][dir3][dir];
                        break;
                }
                Octree_potentialBallWallCollisions(child, cs, w, coord, dir);
            }
        }
    }
    else {
        BallWallPair *bwp;
        int i;
        //Add (ball, w) for all balls in this
        for(i = 0; i < p->i_balls; i++) {
            bwp = (BallWallPair*) malloc(sizeof(BallWallPair));
            bwp->ball = p->balls[i];
            bwp->wall = w;
            BallWallPair_vector_push(cs, bwp);
        }
    }
}

This is a helper function for computing potential ball-wall collisions. It's explained in the comments.
//Adds a ball to this
void Octree_add(Octree *p, Ball* ball) {
    p->numBalls++;

    if (!p->hasChildren && p->depth < MAX_OCTREE_DEPTH &&
        p->numBalls > MAX_BALLS_PER_OCTREE) {
        Octree_haveChildren(p);
    }

    if (p->hasChildren) {
        Octree_fileBall(p, ball, ball->pos, 1);
    }
    else {
        //balls.insert(ball);
        p->balls[p->i_balls] = ball;
        p->i_balls++;
    }
}

The add function adds a new ball to the octree.


//Removes a ball from this
void remove_ball(Octree *p, Ball* ball) {
    Octree_remove(p, ball, ball->pos);
}

The function for removing a ball just calls our other remove function, using the ball's current position.
//Changes the position of a ball in this from oldPos to ball->pos
void Octree_ballMoved(Octree *p, Ball* ball, Vec3f oldPos) {
    // remove(ball, oldPos);
    Octree_remove(p, ball, oldPos);
    //add(ball);
    Octree_add(p, ball);
}

This function is called whenever the ball moves from a position oldPos to ball->pos. To make our lives easier, we just remove the ball and then add it again. We could go through the trouble of figuring out exactly in which cubes the ball is now, but wasn't, and in which cubes the ball was, but isn't any more. But I bet this wouldn't speed things up too much anyway.
//Adds potential ball-ball collisions to the specified set
void Octree_potentialBallBallCollisions(Octree *p, BallPair_vector *collisions) {
    int x, y, z;
    if (p->hasChildren) {
        for(x = 0; x < 2; x++) {
            for(y = 0; y < 2; y++) {
                for(z = 0; z < 2; z++) {
                    Octree_potentialBallBallCollisions(p->children[x][y][z],
                                                       collisions);
                }
            }
        }
    }
    else {
        //Add all pairs (ball1, ball2) from balls
        BallPair *bp;
        int i, j;
        for(i = 0; i < p->i_balls; i++) {
            for(j = 0; j < p->i_balls; j++) {
                //This test makes sure that we only add each pair once
                if (p->balls[i] < p->balls[j]) {
                    bp = (BallPair*) malloc(sizeof(BallPair));
                    bp->ball1 = p->balls[i];
                    bp->ball2 = p->balls[j];
                    BallPair_vector_push(collisions, bp);
                }
            }
        }
    }
}

Here's the meat of the octree. In this function, we compute all potential ball-ball collisions and put them in the collisions vector. If a node has children, we just ask them for their potential ball-ball collisions; otherwise, we take every pair of balls in the node.
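The balls[i] < balls[j] test is worth a second look: it orders each unordered pair {a, b} by pointer address, so exactly one of (a, b) and (b, a) passes the test, and each pair is added once. A small standalone demonstration of the idea (the function name is illustrative):

```c
/* Counts unordered pairs among n distinct items using the same
 * pointer-ordering trick as the octree: each pair {a, b} passes the
 * items[i] < items[j] test for exactly one (i, j) ordering. */
int countPairs(void **items, int n) {
    int i, j, pairs = 0;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            if (items[i] < items[j])
                pairs++;
    return pairs;
}
```

For n distinct items this yields n * (n - 1) / 2 pairs, the number of unordered pairs.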
void potentialBallWallCollisions(BallWallPair_vector *potentialCollisions,
                                 Ball_vector *balls, Octree* octree) {
    //Fast method
    // octree->potentialBallWallCollisions(potentialCollisions);
    Octree_potentialBallWallCollisions(octree, potentialCollisions,
                                       WALL_LEFT, 'x', 0);
    Octree_potentialBallWallCollisions(octree, potentialCollisions,
                                       WALL_RIGHT, 'x', 1);
    Octree_potentialBallWallCollisions(octree, potentialCollisions,
                                       WALL_BOTTOM, 'y', 0);
    Octree_potentialBallWallCollisions(octree, potentialCollisions,
                                       WALL_TOP, 'y', 1);
    Octree_potentialBallWallCollisions(octree, potentialCollisions,
                                       WALL_FAR, 'z', 0);
    Octree_potentialBallWallCollisions(octree, potentialCollisions,
                                       WALL_NEAR, 'z', 1);
}
In this function, we compute all potential ball-wall collisions by calling our helper function six times, once for each wall. And that's how our octree works. Now, let's make sure we didn't do all that work for nothing, and that the octree did speed things up. Run the program, and keep pressing space bar to see how many balls you can add until things start to slow down. If your computer's too fast, you might want to slow down the program by decreasing TIME_BETWEEN_UPDATES. (If your computer's too slow, you could increase TIME_BETWEEN_UPDATES, but then it'll look pretty cruddy.)

Exercise
Change the spheres into axis-aligned cubes. Make sure to change the angles at which they bounce off of each other.
Change the octree so that each octree node without any children has a depth of at least 4. In other words, make sure that nodes of depth 1 to 3 all have children.
Make a version of the program where all the spheres lie in the same plane and move in the x and y dimensions. Use a two-dimensional "quadtree" rather than a three-dimensional "octree".

Part 3: Special Effects


Lesson 12: Alpha Blending
In our program, we will be drawing a cube whose faces are all transparent. Our program will look like this:

A transparent face has a particular amount of opacity, known as its alpha value. Our faces have an alpha value of 0.6, indicating that they are 60% opaque and 40% transparent. The alpha value is actually treated as a fourth component of the color of each face. The magenta face, for example, has a color of (1, 0, 1, 0.6). In order to draw the cube, we draw the faces from back to front. Whenever we draw a face, OpenGL goes through all of the pixels on the face and averages them with the pixels that are already there. So when we draw the face that is farthest back, for each pixel, we take 60% of that pixel and add 40% of the pixel that's already there, which happens to be black. In other words, the red component of the resulting pixel, for example, will be 0.6 times the red component of the pixel we're drawing plus 0.4 times the red component of the black pixel already there. In earlier programs, the order in which we drew the faces didn't matter. But in this case, it's important to draw them from back to front; this simulates real-world transparency, as light must travel through objects in order from back to front before reaching your eye. There's actually a clever technique called backface culling that allows us to make transparent objects without carefully sorting the faces. Let's take a look at the code that makes it all happen.
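The 60%/40% averaging described above is, per color channel, result = alpha * src + (1 - alpha) * dst, which is exactly what glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) computes. A one-line sketch of the arithmetic:

```c
/* One channel of the back-to-front blend:
 * result = alpha * src + (1 - alpha) * dst */
float blendChannel(float src, float dst, float alpha) {
    return alpha * src + (1 - alpha) * dst;
}
```

For example, drawing a pure-red channel (1.0) with alpha 0.6 over black (0.0) yields 0.6, matching the text's description of the farthest-back face.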
const float PI = 3.1415926535f;
const float BOX_SIZE = 7.0f; //The length of each side of the cube
const float ALPHA = 0.6f; //The opacity of each face of the cube

We start with a few constants: pi, the length of each side of the cube, and the alpha value for each face of the cube.
//Three perpendicular vectors for a face of the cube. out indicates the
//direction that the face is from the center of the cube.
typedef struct Face_t {
    Vec3f up;
    Vec3f right;
    Vec3f out;
} Face;

Here, we have a structure that we'll use to store where each face is. We're going to specify the coordinates of each vertex of each face directly rather than using glRotatef to figure them out for us. This is because we need to know the positions of all of the faces, so that we can sort them from back to front. For each face, we store three perpendicular vectors with magnitude 1. We have the vector out, which points outward from the face. It is the same as the normal vector. We can use it to figure out the center of the face; the center is located at out * BOX_SIZE / 2. up and right are vectors indicating two sides of the face.
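The tutorial's sorting code is elided here, but the idea it describes — ordering faces from back to front by their centers — can be sketched with a qsort comparator. This is an illustrative sketch under the assumption that face centers have already been transformed into camera space, where the camera looks down -z (so smaller z means farther away); the FaceDepth type is hypothetical:

```c
#include <stdlib.h>

/* Hypothetical record pairing a face's camera-space center z with an id. */
typedef struct {
    float centerZ;
    int id;
} FaceDepth;

/* Sorts ascending by z, so the farthest face (most negative z) comes
 * first and gets drawn first. */
int compareFaces(const void *a, const void *b) {
    float za = ((const FaceDepth*)a)->centerZ;
    float zb = ((const FaceDepth*)b)->centerZ;
    if (za < zb) return -1;
    if (za > zb) return 1;
    return 0;
}
```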
//Represents a cube.
typedef struct Cube_t {
    Face top;
    Face bottom;
    Face left;
    Face right;
    Face front;
    Face back;
} Cube;

This structure stores all of the faces of the cube.


//Rotates the vector by the indicated number of degrees about the specified axis
Vec3f rotate(Vec3f v, Vec3f axis, float degrees) {
    Vec3f res;
    Vec3f* tmp;
    Vec3f* cross;
    tmp = Vec3f_normalize(&axis); //Use a unit-length axis throughout
    float radians = degrees * PI / 180;
    float s = sin(radians);
    float c = cos(radians);
    cross = Vec3f_cross(&v, tmp);
    res.v[0] = v.v[0] * c + tmp->v[0] * Vec3f_dot(tmp, &v) * (1 - c) +
               cross->v[0] * s;
    res.v[1] = v.v[1] * c + tmp->v[1] * Vec3f_dot(tmp, &v) * (1 - c) +
               cross->v[1] * s;
    res.v[2] = v.v[2] * c + tmp->v[2] * Vec3f_dot(tmp, &v) * (1 - c) +
               cross->v[2] * s;
    return res;
}

This function rotates a vector a certain number of degrees about a particular axis. The formula is taken from MathWorld.
//Rotates the face by the indicated number of degrees about the specified axis
void Face_rotate(Face *face, Vec3f axis, float degrees) {
    face->up = rotate(face->up, axis, degrees);
    face->right = rotate(face->right, axis, degrees);
    face->out = rotate(face->out, axis, degrees);
}

This rotate function rotates a particular face. To do that, we just have to rotate the face's up, right, and out vectors.
//Rotates the cube by the indicated number of degrees about the specified axis
void Cube_rotate(Cube *cube, Vec3f axis, float degrees) {
    Face_rotate(&(cube->top), axis, degrees);
    Face_rotate(&(cube->bottom), axis, degrees);
    Face_rotate(&(cube->left), axis, degrees);
    Face_rotate(&(cube->right), axis, degrees);
    Face_rotate(&(cube->front), axis, degrees);
    Face_rotate(&(cube->back), axis, degrees);
}

This rotate function rotates a cube by rotating each of its faces.


//Initializes the up, right, and out vectors for the six faces of the cube.
void initCube(Cube *cube) {
    Vec3f_Init(&cube->top.up, 0, 0, -1);
    Vec3f_Init(&cube->top.right, 1, 0, 0);
    Vec3f_Init(&cube->top.out, 0, 1, 0);

    Vec3f_Init(&cube->bottom.up, 0, 0, 1);
    Vec3f_Init(&cube->bottom.right, 1, 0, 0);
    Vec3f_Init(&cube->bottom.out, 0, -1, 0);

    Vec3f_Init(&cube->left.up, 0, 0, -1);
    Vec3f_Init(&cube->left.right, 0, 1, 0);
    Vec3f_Init(&cube->left.out, -1, 0, 0);

    Vec3f_Init(&cube->right.up, 0, -1, 0);
    Vec3f_Init(&cube->right.right, 0, 0, 1);
    Vec3f_Init(&cube->right.out, 1, 0, 0);

    Vec3f_Init(&cube->front.up, 0, 1, 0);
    Vec3f_Init(&cube->front.right, 1, 0, 0);
    Vec3f_Init(&cube->front.out, 0, 0, 1);

    Vec3f_Init(&cube->back.up, 1, 0, 0);
    Vec3f_Init(&cube->back.right, 0, 1, 0);
    Vec3f_Init(&cube->back.out, 0, 0, -1);
}

The initCube function initializes the up, right, and out vectors of each of the faces of a cube.
//Stores the four vertices of the face in the array "vs".
void faceVertices(Face *face, Vec3f* vs) {
    vs[0].v[0] = BOX_SIZE / 2 * (face->out.v[0] - face->right.v[0] - face->up.v[0]);
    vs[0].v[1] = BOX_SIZE / 2 * (face->out.v[1] - face->right.v[1] - face->up.v[1]);
    vs[0].v[2] = BOX_SIZE / 2 * (face->out.v[2] - face->right.v[2] - face->up.v[2]);

    vs[1].v[0] = BOX_SIZE / 2 * (face->out.v[0] - face->right.v[0] + face->up.v[0]);
    vs[1].v[1] = BOX_SIZE / 2 * (face->out.v[1] - face->right.v[1] + face->up.v[1]);
    vs[1].v[2] = BOX_SIZE / 2 * (face->out.v[2] - face->right.v[2] + face->up.v[2]);

    vs[2].v[0] = BOX_SIZE / 2 * (face->out.v[0] + face->right.v[0] + face->up.v[0]);
    vs[2].v[1] = BOX_SIZE / 2 * (face->out.v[1] + face->right.v[1] + face->up.v[1]);
    vs[2].v[2] = BOX_SIZE / 2 * (face->out.v[2] + face->right.v[2] + face->up.v[2]);

    vs[3].v[0] = BOX_SIZE / 2 * (face->out.v[0] + face->right.v[0] - face->up.v[0]);
    vs[3].v[1] = BOX_SIZE / 2 * (face->out.v[1] + face->right.v[1] - face->up.v[1]);
    vs[3].v[2] = BOX_SIZE / 2 * (face->out.v[2] + face->right.v[2] - face->up.v[2]);
}

The faceVertices function figures out the four vertices of the quadrilateral for a face and stores them in a vs array.
void drawTopFace(Face *face) {
    Vec3f vs[4];
    faceVertices(face, vs);

    glDisable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);

    glColor4f(1.0f, 1.0f, 0.0f, ALPHA);
    glNormal3f(face->out.v[0], face->out.v[1], face->out.v[2]);
    glVertex3f(vs[0].v[0], vs[0].v[1], vs[0].v[2]);
    glVertex3f(vs[1].v[0], vs[1].v[1], vs[1].v[2]);
    glVertex3f(vs[2].v[0], vs[2].v[1], vs[2].v[2]);
    glVertex3f(vs[3].v[0], vs[3].v[1], vs[3].v[2]);

    glEnd();
}

The drawTopFace function takes care of drawing the top face of the cube. It first calls faceVertices to figure out where to draw the face. Then, it draws the face. The call to glColor4f is new. It specifies the color of the face using red, green, blue, and alpha components, which we'll want to do to make the object look transparent.
void drawBottomFace(Face *face) {
    //...
}

void drawLeftFace(Face *face) {
    //...
}

void drawRightFace(Face *face) {
    //...
}

void drawFrontFace(Face *face, GLuint textureId) {
    //...
}

void drawBackFace(Face *face, GLuint textureId) {
    //...
    glColor4f(1.0f, 1.0f, 1.0f, ALPHA);
    //...
}

We also have separate functions for drawing the bottom, left, right, front, and back faces. We need separate functions in order to have them be different colors, or use color blending, or use textures. The back face is a textured face. You'll notice that calling glColor4f allows us to make even textured faces transparent.
//Draws the indicated face on the specified cube.
void drawFace(Face* face, Cube *cube, GLuint textureId) {
    if (face == &(cube->top)) {
        drawTopFace(&(cube->top));
    }
    else if (face == &(cube->bottom)) {
        drawBottomFace(&(cube->bottom));
    }
    else if (face == &(cube->left)) {
        drawLeftFace(&(cube->left));
    }
    else if (face == &(cube->right)) {
        drawRightFace(&(cube->right));
    }
    else if (face == &(cube->front)) {
        drawFrontFace(&(cube->front), textureId);
    }
    else {
        drawBackFace(&(cube->back), textureId);
    }
}

The drawFace function takes a face, figures out whether it is the top, bottom, left, right, front, or back face, and calls the appropriate draw function.


void initRendering() {
    //...
    glEnable(GL_BLEND); //Enable alpha blending
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); //Set the blend function
    //...
}

We have a couple of new calls in initRendering. First, we call glEnable(GL_BLEND) to enable alpha blending. We mentioned that when we draw a pixel for a particular face, in this program, we take 60% of that pixel and 40% of the pixel already there. Well, actually, you can use all kinds of weird functions to figure out the new pixel value. But for normal transparency, you'll want to use the function we mentioned, so we'll just call glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
void drawScene() {
    //...
    Face *faces[6];
    faces[0] = &(_cube.top);
    faces[1] = &(_cube.bottom);
    faces[2] = &(_cube.left);
    faces[3] = &(_cube.right);
    faces[4] = &(_cube.front);
    faces[5] = &(_cube.back);

    int i;
    for(i = 0; i < 6; i++) {
        drawFace(faces[i], &_cube, _textureId);
    }
    //...

We draw the faces of the cube.


void update(int value) {
    Vec3f tmp;
    Vec3f_Init(&tmp, 1, 1, 0);
    Cube_rotate(&_cube, tmp, 1);

    glutPostRedisplay();
    glutTimerFunc(25, update, 0);
}

The update function rotates the cube.


int main(int argc, char** argv) {
    //...
    initCube(&_cube);
    //...
}

In our main function, we have a call to our initCube function. That's our program; it shows how to make transparent-looking objects in OpenGL. Download the source code, compile the program, and run it.

Exercise
Rather than sorting the faces from back to front, draw them in the following order: top, bottom, left, right, front, back. Notice that the scene looks messed up; the transparency doesn't look right.
Give the top and bottom faces a different alpha value than the other faces.
Make the 'a' key toggle whether alpha blending is on.

