Due Wednesday 4/10 at 11:59 pm. You must work individually.
NOTE: For maximum compatibility with student computers/drivers, we are only going to use OpenGL 2.1 and GLSL 1.2.
(The graininess is due to the video compression – better quality video here.)
There is no base code provided for this assignment. Please start with your previous lab/assignment code. Please contact the instructor ASAP if you need help completing your previous assignment.
Start with your A3 or A4 code.
Add at least 100 objects in the scene.
Add at least 10 lights, each with a color.
Set the clear color to black with glClearColor(0.0f, 0.0f, 0.0f, 1.0f); in the init function. Pass the light positions and colors to the fragment shader as arrays of glm::vec3s. You can use the following syntax: glUniform3fv(uniformID, count, value_ptr(array[0])), where uniformID is the ID pointing to the uniform variable in the shader (e.g., prog->getUniform("foo")), count is the array length, and array is the array of glm::vec3s. In the shader, the array should be declared as uniform vec3 foo[10], assuming that count=10.
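On the C++ side, the lights can be stored in fixed-size arrays and uploaded once per frame. Below is a minimal sketch; the names sendLights, lightPos, and lightColor are placeholders, and Program is assumed to be the course helper class whose getUniform() is used above.

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Upload 10 light positions (in camera space) and colors to the shader.
// Assumes the shader declares: uniform vec3 lightPos[10]; uniform vec3 lightColor[10];
void sendLights(Program *prog, glm::vec3 lightPos[10], glm::vec3 lightColor[10])
{
    glUniform3fv(prog->getUniform("lightPos"), 10, glm::value_ptr(lightPos[0]));
    glUniform3fv(prog->getUniform("lightColor"), 10, glm::value_ptr(lightColor[0]));
}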
Now add attenuation to the lights. We’re going to use the standard quadratic attenuation model in OpenGL: \[ A = \frac{1}{A_0 + A_1 r + A_2 r^2}, \] where \(r\) is the distance between the fragment and the light, and \(A_0\), \(A_1\), \(A_2\) are the constant, linear, and quadratic attenuation factors. For this part of the assignment, we want the light to fall off to \(10\)% of its strength at \(r=3\) and to \(1\)% at \(r=10\), which gives us \(A_0 = 1.0\), \(A_1 = 0.0429\), and \(A_2 = 0.9857\). The color at the fragment should be scaled by this attenuation value.
vec3 fragColor = ke; // emissive color of the object (see below)
for(...) { // for each light
    float diffuse = ...;
    float specular = ...;
    vec3 color = lightColor * (kd * diffuse + ks * specular);
    float attenuation = 1.0 / (A0 + ...);
    fragColor += color * attenuation;
}
Each light should be displayed as a sphere, using the same fragment shader as everything else. We’ll use the “emissive” color, \(k_e\), for this, which is the color that the light is emitting. This value should be set to the light color when rendering the sphere for the light and to \((0,0,0)\) for everything else. Putting everything together, the fragment color for all objects and lights should be computed as follows: \[ \vec{k}_e + \sum_i \vec{L}_i \odot \frac{\vec{k}_d \max(0, \hat{n} \cdot \hat{l}_i) + \vec{k}_s \max(0, \hat{n} \cdot \hat{h}_i)^s}{A_0 + A_1 r_i + A_2 r_i^2}, \] where the subscript \(i\) refers to the \(i\)th light. The \(\odot\) notation indicates that the multiplication should be done component-wise, for R, G, and B. This equation allows us to use a single shader to render everything in the scene: the lights will be colored using \(k_e\), and other objects will be colored with attenuated Blinn-Phong. To summarize: when drawing the sphere for a light, set \(k_e\) to that light’s color; when drawing anything else, set \(k_e = (0,0,0)\).
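Putting the pieces into actual GLSL, the fragment shader for this stage might look like the following. This is only a sketch: the varying/uniform names (vPos, vNor, lightPos, lightColor, ke, kd, ks, s) are assumptions to be matched to your own code, and the attenuation constants are the ones derived above.

#version 120
varying vec3 vPos; // fragment position in camera space
varying vec3 vNor; // fragment normal in camera space
uniform vec3 lightPos[10];   // light positions in camera space
uniform vec3 lightColor[10]; // light colors
uniform vec3 ke; // emissive: the light color for light spheres, (0,0,0) otherwise
uniform vec3 kd; // diffuse color
uniform vec3 ks; // specular color
uniform float s; // shininess

void main()
{
    vec3 n = normalize(vNor);
    vec3 e = normalize(-vPos); // eye vector; the camera sits at the origin in camera space
    vec3 fragColor = ke;
    for(int i = 0; i < 10; ++i) {
        vec3 toLight = lightPos[i] - vPos;
        float r = length(toLight);
        vec3 l = toLight / r;
        vec3 h = normalize(l + e);
        float diffuse = max(0.0, dot(n, l));
        float specular = pow(max(0.0, dot(n, h)), s);
        vec3 color = lightColor[i] * (kd * diffuse + ks * specular);
        float attenuation = 1.0 / (1.0 + 0.0429*r + 0.9857*r*r); // A0, A1, A2 from above
        fragColor += color * attenuation;
    }
    gl_FragColor = vec4(fragColor, 1.0);
}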
Rather than changing the scale of the bunny over time as in A4, rotate it around its vertical axis. As in A4, the overall scale of the bunny should still be randomized, and the bunny should touch the floor but not intersect it.
In this image (and the following), I am using only one light to better illustrate the motion.
Rather than changing the scale of the teapot over time as in A4, shear it so that it sways from side to side. The teapot should look like it is glued to the floor. As in A4, the overall scale of the teapot should still be randomized.
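To make the teapot sway, you can put a time-varying shear term in the model matrix. Below is a minimal sketch with GLM; the function name, the amplitude 0.5, and the use of cos(t) are arbitrary choices, not part of the assignment spec.

#include <cmath>
#include <glm/glm.hpp>

// Shear X as a function of Y: x' = x + a*y.
// Points at y = 0 do not move, so the teapot's base stays glued to the floor.
glm::mat4 shearMatrix(float t)
{
    glm::mat4 S(1.0f);
    S[1][0] = 0.5f * std::cos(t); // GLM is column-major: S[column][row]
    return S;
}

Apply this matrix between the translation to the teapot’s position and the teapot’s other local transforms, so the shear acts about the teapot’s own base.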
Add some bouncing spheres, following the steps outlined in Lab 10. The sphere should have a randomized radius, and it should touch the floor but not intersect it. When the sphere is moving up or down, its scale in X and Z should be made smaller to achieve “squash and stretch.” The geometry of the sphere should be created and stored in memory just once in the init()
function. To display multiple spheres in the scene, use different transformation matrices passed in as uniform variables.
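For the squash and stretch, one option is to tie the scale factors to the sphere’s vertical speed, as in the sketch below; the bounce function |sin(t)| and the constants are arbitrary, and you still need to offset the sphere by its (randomized) radius so it touches the floor.

#include <cmath>
#include <glm/glm.hpp>

// Scale for a bouncing sphere at time t. With the bounce y(t) = |sin(t)|,
// the vertical speed is proportional to |cos(t)|: large mid-flight, zero at
// the top of the arc and at impact.
glm::vec3 squashStretchScale(float t)
{
    float speed = std::abs(std::cos(t));
    float sy = 1.0f + 0.2f * speed;   // stretch in Y while moving up or down
    float sxz = 1.0f / std::sqrt(sy); // shrink X and Z to roughly preserve volume
    return glm::vec3(sxz, sy, sxz);
}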
Add some surfaces of revolution, following the steps outlined in Lab 11. First, implement a static surface on the CPU and then move the computation over to the GPU to allow a dynamic surface. Like the sphere, the vertex attributes of the surface of revolution should be created just once in the init()
function. To display multiple surfaces of revolution in the scene, use different transformation matrices passed in as uniform variables. Just like the other objects in the scene, the surface-of-revolution objects should just touch the ground.
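Once the computation moves to the GPU, the surface-of-revolution vertex shader might look like the sketch below. The profile function f(x) = cos(x + t) + 2 and the attribute layout (x in aPos.x, theta in aPos.y) are assumptions; substitute whatever Lab 11 specifies.

#version 120
uniform mat4 P;
uniform mat4 MV;
uniform float t; // time
attribute vec2 aPos; // x in aPos.x, theta in aPos.y
varying vec3 vPos; // in camera space
varying vec3 vNor; // in camera space

void main()
{
    float x = aPos.x;
    float theta = aPos.y;
    float f = cos(x + t) + 2.0; // profile radius
    float dfdx = -sin(x + t);   // df/dx, needed for the normal
    vec3 pos = vec3(x, f*cos(theta), f*sin(theta));
    // Normal = cross product of the partial derivatives w.r.t. theta and x.
    vec3 dpdx = vec3(1.0, dfdx*cos(theta), dfdx*sin(theta));
    vec3 dpdtheta = vec3(0.0, -f*sin(theta), f*cos(theta));
    vec3 nor = normalize(cross(dpdtheta, dpdx));
    vPos = (MV * vec4(pos, 1.0)).xyz;
    vNor = normalize((MV * vec4(nor, 0.0)).xyz); // assumes no non-uniform scale in MV
    gl_Position = P * vec4(vPos, 1.0);
}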
We are now going to implement deferred rendering. Since this step requires render-to-texture and multiple render targets, it may help to complete Lab 12 first. Deferred rendering will require a substantial overhaul of your code base, so make sure to use source control so that you can easily get back to your old code if needed.
In deferred rendering, we use two passes. In the first pass, we render to multiple render targets to create textures that hold all the information needed to compute the color of each fragment. In the second pass, we render a view-aligned quad with the textures from the first pass, and then do the actual lighting computation in the fragment shader.
The four images below show the four textures we need to generate in the first pass. The size of these textures should be the same as the onscreen framebuffer size, which can be obtained with glfwGetFramebufferSize(...)
. (The default size is 640 x 480
; later, support for resizing the window will be added.)
The first image is the camera-space position of all of the fragments. In this visualization, the position \((x,y,z)\) is coded with \((r,g,b)\). Since RGB needs to be between \(0\) and \(1\), the visualization shows a black area in the lower left, corresponding to the region where the camera-space positions are all negative. Also, since the camera-space Z coordinate of all of these fragments is negative, there is no blue component in the color output of any of the fragments.
The second image is the camera-space normal of all of the fragments. As before, the normal’s \((x,y,z)\) is coded with \((r,g,b)\). Fragments whose normals are pointing to the right in camera-space are colored red, those whose normals are pointing up are colored green, and those whose normals are pointing toward the camera are colored blue.
The third image is the emissive color of all the fragments. In this image, I have 200 randomly colored lights, but in your code, you may only have a small number of lights.
The fourth image is the diffuse color of all the fragments.
These four textures must be generated as the output of the first pass. To do so, first, change the texture format in your C++ code to use 16-bit floats: GL_RGB16F
instead of GL_RGBA8
, and GL_RGB
instead of GL_RGBA
.
Your L12 code will have the following lines:
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
...
In A5, replace the glTexImage2D
line above with the following:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0, GL_RGB, GL_FLOAT, NULL);
In this assignment, there are four textures, so the line above must be used four times. The fragment shader of the first pass can now write floating point values to the four textures:
#version 120
varying vec3 vPos; // in camera space
varying vec3 vNor; // in camera space
uniform vec3 ke;
uniform vec3 kd;
void main()
{
    gl_FragData[0].xyz = vPos;
    gl_FragData[1].xyz = vNor;
    gl_FragData[2].xyz = ke;
    gl_FragData[3].xyz = kd;
}
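For these gl_FragData writes to land in the four textures, each texture must be attached to the framebuffer as its own color attachment, and all four outputs must be enabled with glDrawBuffers(). A sketch, assuming a Lab 12-style setup where framebufferID and textures[4] are your own variables:

glBindFramebuffer(GL_FRAMEBUFFER, framebufferID);
// Attach the four textures as color attachments 0..3.
for(int i = 0; i < 4; ++i) {
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, textures[i], 0);
}
// Route gl_FragData[0..3] to the four attachments.
GLenum attachments[4] = {
    GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
    GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3
};
glDrawBuffers(4, attachments);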
The vertex shaders for the first pass depend on what is being drawn. Bunny, teapot, and sphere should be drawn with a simple vertex shader that transforms the position and normal into camera space (vPos
and vNor
in the fragment shader above). Surface of revolution will require another vertex shader.
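That simple vertex shader might look like the sketch below (the attribute and uniform names are assumptions); the surface-of-revolution version would compute pos and nor from the Lab 11 formulas instead of reading them from attributes.

#version 120
uniform mat4 P;
uniform mat4 MV;
attribute vec4 aPos; // in model space
attribute vec3 aNor; // in model space
varying vec3 vPos; // in camera space
varying vec3 vNor; // in camera space

void main()
{
    gl_Position = P * (MV * aPos);
    vPos = (MV * aPos).xyz;
    // Assumes no non-uniform scale in MV; otherwise use the inverse transpose.
    vNor = normalize((MV * vec4(aNor, 0.0)).xyz);
}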
In the second pass, we draw a view-aligned quad that completely fills the screen. In this stage of the assignment, we can simply draw a unit square somewhere close enough to the camera so that it ends up covering the whole screen. The vertex shader for the second pass is very simple:
#version 120
uniform mat4 P;
uniform mat4 MV;
attribute vec4 aPos;
void main()
{
gl_Position = P * (MV * aPos);
}
This vertex shader simply transforms the vertex position from model space to clip space. The fragment shader will use the textures created in the first pass to compute the final fragment colors that end up on the screen. Rather than using the texture coordinates of the quad, we can compute them in the fragment shader using the keyword gl_FragCoord
, which stores the window relative coordinate values for the fragment. Dividing this by the window size gives the correct texture coordinates, which are \((0,0)\) at the lower left corner and \((1,1)\) at the upper right corner. Using these texture coordinates, read from the four textures and then calculate the color of the fragment. Additionally, you need to pass in the light information to this fragment shader as uniform variables (e.g., light positions and colors).
#version 120
uniform sampler2D posTexture;
uniform sampler2D norTexture;
uniform sampler2D keTexture;
uniform sampler2D kdTexture;
uniform vec2 windowSize;
// more uniforms for lighting
...
void main()
{
vec2 tex;
tex.x = gl_FragCoord.x/windowSize.x;
tex.y = gl_FragCoord.y/windowSize.y;
// Fetch shading data
vec3 pos = texture2D(posTexture, tex).rgb;
vec3 nor = texture2D(norTexture, tex).rgb;
vec3 ke = texture2D(keTexture, tex).rgb;
vec3 kd = texture2D(kdTexture, tex).rgb;
// Calculate lighting here
...
gl_FragColor = ...
}
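Before drawing the quad, the C++ code must bind the four first-pass textures to separate texture units and point the sampler uniforms at those units. A sketch; the *TextureID variables are placeholders, and prog->getUniform() is the same helper used earlier:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, posTextureID);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, norTextureID);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, keTextureID);
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, kdTextureID);
// Each sampler2D uniform stores the index of the texture unit it reads from.
glUniform1i(prog->getUniform("posTexture"), 0);
glUniform1i(prog->getUniform("norTexture"), 1);
glUniform1i(prog->getUniform("keTexture"), 2);
glUniform1i(prog->getUniform("kdTexture"), 3);
glUniform2f(prog->getUniform("windowSize"), (float)width, (float)height);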
For debugging, consider these substeps.
In the fragment shader for the second pass, simply color the quad red (gl_FragColor.rgb = vec3(1.0, 0.0, 0.0);
). This should give you a fully red screen.
Color the fragment using the computed texture coordinates (gl_FragColor.rg = tex;
). The screen should look red/green.
Color the fragment using each texture (e.g., gl_FragColor.rgb = pos;
). You should see the 4 images at the beginning of this task.
Please set up the code so that the grader can easily produce the 4 images at the top of this section. To get full points, the final output, as well as these 4 images must be correct. Please put in your README file how to produce these images (e.g., “Uncomment line XX in some shader file.”).
For debugging tips, see this subsection from A2.
Now that we have deferred rendering, it is easy to apply screen-space effects. For this task, implement blurring. Rather than simply calling GLSL’s built-in texture2D(...)
function, call the following sampleTextureArea(...)
instead to obtain the position, normal, diffuse color, and emissive color at each fragment.
Simply copy-and-paste this GLSL sampling code and call it from your fragment shader. Use the b
key to toggle between standard output and blurred output.
vec2 poissonDisk[] = vec2[](
    vec2( 0.220147, 0.976896),
    vec2(-0.735514, 0.693436),
    vec2(-0.200476, 0.310353),
    vec2(-0.180822, 0.454146),
    vec2( 0.292754, 0.937414),
    vec2( 0.564255, 0.207879),
    vec2( 0.178031, 0.024583),
    vec2( 0.613912,-0.205936),
    vec2( 0.385540,-0.070092),
    vec2(-0.962838, 0.378319),
    vec2( 0.886362, 0.032122),
    vec2(-0.466531,-0.741458),
    vec2(-0.006773,-0.574796),
    vec2( 0.739828,-0.410584),
    vec2(-0.590785,-0.697557),
    vec2( 0.081436,-0.963262),
    vec2(-1.000000,-0.100160),
    vec2( 0.622430, 0.680868)
);
vec3 sampleTextureArea(sampler2D texture, vec2 tex0)
{
    const int N = 18; // [1-18]
    const float blur = 0.005;
    vec3 val = vec3(0.0, 0.0, 0.0);
    for(int i = 0; i < N; i++) {
        val += texture2D(texture, tex0.xy + poissonDisk[i]*blur).rgb;
    }
    val /= N;
    return val;
}
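One way to wire up the toggle is an integer uniform set from a GLFW character callback; a sketch, where blurOn and the uniform name are placeholders:

#include <GLFW/glfw3.h>

static bool blurOn = false; // flipped by the 'b' key

// Register with: glfwSetCharCallback(window, char_callback);
static void char_callback(GLFWwindow *window, unsigned int codepoint)
{
    if(codepoint == 'b') {
        blurOn = !blurOn;
    }
}

// In the second-pass draw code:
// glUniform1i(prog->getUniform("blurOn"), blurOn ? 1 : 0);

In the fragment shader, branch on the uniform, e.g., vec3 pos = (blurOn != 0) ? sampleTextureArea(posTexture, tex) : texture2D(posTexture, tex).rgb;.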
Add support for window resizing for deferred rendering. Use the framebuffer size callback in GLFW. Note that the code will slow down a lot if the window size is increased, since there are many more fragments to be processed.
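A sketch of that callback, assuming the four first-pass textures live in a global array named textures; each one is reallocated with the same glTexImage2D call used in init():

#include <GLFW/glfw3.h>

// Register with: glfwSetFramebufferSizeCallback(window, resize_callback);
static void resize_callback(GLFWwindow *window, int width, int height)
{
    glViewport(0, 0, width, height);
    // Reallocate the four first-pass textures at the new size.
    for(int i = 0; i < 4; ++i) {
        glBindTexture(GL_TEXTURE_2D, textures[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0, GL_RGB, GL_FLOAT, NULL);
    }
    glBindTexture(GL_TEXTURE_2D, 0);
}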
Total: 100 plus 5 bonus points
Failing to follow these points may decrease your “general execution” score. On Linux/Mac, make sure that your code compiles and runs by typing:
> mkdir build
> cd build
> cmake ..
> make
> ./A5 ../resources
If you’re on Windows, make sure that you can build your code using the same procedure as in Lab 0.
For this assignment, there should be only one argument. You can hard code all your input files (e.g., obj files) in the resources directory.
Include a readme file (README.txt or README.md); as noted above, it must explain how to produce the four debug images.
Your submission should include src/, resources/, CMakeLists.txt, and your readme file. The resources folder should contain the obj files and the glsl files. Do not include editor backup files (*.~), object files (*.o), IDE folders (.vs), or version-control folders (.git).
Name the zip file UIN.zip (e.g., 12345678.zip). When unzipped, it should produce a single folder named UIN/ (e.g., 12345678/) containing src/, CMakeLists.txt, etc. Use the .zip format (not .gz, .7z, .rar, etc.).