CS-341 Computer Graphics 2025

Project Framework Documentation

1. Introduction

This is the documentation of the starting framework for the final project of CS-341 Computer Graphics at EPFL, 2025 edition. The purpose of this document is to guide you through the framework structure and its main functionalities.

The framework is implemented in JavaScript and GLSL, and uses the regl library for WebGL rendering.

Run a local server and open index.html in your browser to visualize the demo scene displayed below.

Demo scene rendered using the framework. The scene contains a skysphere, multiple light sources,
a procedurally generated terrain, a set of random trees that evolve with time, and an interface to control the scene parameters.

About This Document

Sections 2. The Pipeline, 3. Additional Components, and 4. Overview of the Source Code provide a high-level summary of the framework structure and its main components. Section 5. Tutorial contains a step-by-step, hands-on guide about some basic actions you will need to perform repeatedly when working on the project.

We recommend you start by reading Section 2 to get a general understanding of how the pipeline is organized. While doing so, it can be useful to keep the source code on the side, and browse through files to check the syntax and the implementation details of the main functions. Section 3 describes some abstractions used in the pipeline – like scenes, cameras, and materials – as well as some helper classes in charge of managing and generating resources and assets, like meshes and textures. While getting familiar with the framework, you can simply skim over Section 3, and pin topics for later reference. Check Section 4 if you need a brief summary of what each file contains.

Once you have an idea of the code structure, Section 5 presents a practical tutorial that will guide you through a few basic actions you will perform multiple times while implementing new features. The tutorial shows, for example, how to add new scenes and shaders, how to load custom meshes, how to visualize framebuffers, and how to introduce parameters that can be interactively modified from the UI.

About the Framework

This code base was developed using the same principles as the GPU pipeline assignments, i.e. homeworks 2 to 6. You should already be familiar with JS, regl, and GLSL syntax. If you need a refresher on the basics of these languages, please refer to the homework handouts.

The design principles that guided us in assembling this framework were, in order of importance: simplicity, completeness, performance.

  • Simplicity: The code we provide is meant to be read. This documentation provides a brief, self-contained overview; however, the functional principles should be clear directly from the implementation. Moreover, we wanted the code to be easily editable. Read the source, use the provided classes as templates, and do not be afraid of experimenting, introducing new abstractions, or modifying existing functions.
  • Completeness: In homework assignments, different features were implemented in different pipelines, for example shadow mapping in GL3 and procedural terrain generation in PG1. This framework will give you a head start, saving you some time from implementing an infrastructure that brings everything together. While this holds in the long term, you might need to invest some effort in the first couple of weeks to get familiar with all the framework components. Do not get discouraged if things seem overwhelming at first: this effort should pay off later on.
  • Performance: We strive to avoid unnecessary computations to guarantee real-time performance even for moderately complex scenes. This, however, was not our main goal. If you plan to achieve particularly complex rendering effects, or have scenes with hundreds of light sources or thousands of objects, you might need to use a very powerful machine, implement suitably optimized algorithms, or restructure part of the given code base.

There may be bugs. If you spot one, please let us and your classmates know, either during the exercise session, or by asking questions on Ed Discussion. We value interactions among all students. Sharing code across groups is not allowed. However, sharing knowledge is strongly encouraged. We would appreciate it if you informed your classmates and the teaching team when you find an original and effective solution for integrating a feature in your code, e.g. by posting a tutorial on Ed, or by linking to useful external sources.

We thank Vicky Chappuis, CG student in 2024, for her help in refactoring the code, and for the introduction of abstractions that eased integration of different features from the homework code bases into a single framework.

2. The Pipeline

This section gives an overview of the pipeline implementation, top to bottom, from the entry point main.js down to GLSL shaders. For all regl-specific functions, the regl API documentation is the reference resource.

The pipeline used to render the demo terrain scene.
Specific GLSL shaders are called by the corresponding ShaderRenderer to produce intermediate results, stored in dedicated buffers.
A final render pass mixes the lighted scene with the computed shadows.

The Entry Point: main.js

Whenever you open index.html in the web browser, main.js is the script that gets executed. The following actions are performed sequentially by the main() function:

  1. Canvas Setup
  • regl defines its own canvas, i.e. the object on which the final image will be displayed in the browser.
  • JavaScript listener functions are defined to handle window resizing, ensuring the displayed result adapts accordingly.
  2. UI Setup
  • Interactive buttons and sliders can be added here to modify the scene in real-time.
  • The UI components defined here, in ui_global_params, are shared across all scenes.
  3. Camera Listeners
  • The camera is set up to respond to mouse and keyboard inputs.
  4. Resources and Scene Instantiation
  • The resource manager loads necessary assets and resources; the scene renderer is initialized.
  • Multiple scenes can be instantiated: the scene assigned to the active_scene variable will be rendered.
  5. UI Instantiation
  • Add the UI components to the overlay window. This includes both general and scene-specific UI controls.
  6. Rendering Loop
  • The rendering loop continuously updates the scene in real-time (see the sketch below):
    • The evolve(dt) function is called for each actor in the scene, passing the time elapsed since the previous frame, dt, as an argument.
    • The scene_renderer.render(scene_state) function is called to render the scene, with a freshly updated scene_state object passed as argument. This object encapsulates information relative to the current frame to be rendered, like the current time and the values of interactive UI parameters.
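For orientation, the loop has roughly the following shape. This is a minimal sketch: the exact fields of scene_state and the time bookkeeping are assumptions, and main.js remains the reference.

    let previous_time = 0;

    regl.frame((frame) => {
      // Time elapsed since the previous frame, in seconds
      const dt = frame.time - previous_time;
      previous_time = frame.time;

      // Let every actor update its own state
      for (const name in active_scene.actors) {
        active_scene.actors[name].evolve(dt);
      }

      // Package per-frame information and render
      const scene_state = {
        scene: active_scene,
        time: frame.time,
        ui_params: active_scene.ui_params,  // hypothetical field names
      };
      scene_renderer.render(scene_state);
    });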

The Core: scene_renderer.js

The main function in the SceneRenderer class is render(). This function orchestrates the rendering process by calling the appropriate shader renderers in the desired order. The scene renderer is responsible for managing the rendering pipeline and the order in which the shaders are executed.

A SceneRenderer object contains several instances of ShaderRenderer subclasses. Each ShaderRenderer object implements a specific rendering effect, such as Blinn-Phong shading (BlinnPhongShaderRenderer), shadow mapping (ShadowMapShaderRenderer), or reflections (MirrorShaderRenderer). ShaderRenderer objects are described in more detail in the dedicated section.

Inside the render() function, the scene is rendered in multiple stages (a condensed sketch follows the list):

  1. Camera Setup
    • Update the camera aspect ratio and the transformation matrices for all objects.
  2. Base Render Passes
    • Render the scene into a texture named base (see the Buffer Rendering section). This includes:
      • Clearing and pre-processing buffers.
      • Rendering the background, terrain, and objects using appropriate shaders (e.g., Blinn-Phong).
      • Optionally rendering reflective objects.
  3. Shadows Render Pass
    • Render shadows into a texture named shadows.
  4. Compositing
    • Combine the base and shadows buffers using a mixer to produce the final output.
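In condensed form, the orchestration looks roughly like this. The member names of the individual shader renderers and the signature of the compositing call are assumptions; the actual render() in scene_renderer.js is the reference.

    render(scene_state) {
      const scene = scene_state.scene;

      // 1. Camera setup: refresh matrices for the current frame
      scene.camera.update_cam_transform();
      scene.camera.compute_objects_transformation_matrices(scene.objects);

      // 2. Base render passes, accumulated in the "base" texture
      this.render_in_texture("base", () => {
        this.pre_processing.render(scene_state);  // clear, fill z-buffer
        this.flat_color.render(scene_state);      // background
        this.blinn_phong.render(scene_state);     // shaded objects
        this.mirror.render(scene_state);          // reflective objects
      });

      // 3. Shadows render pass, stored in the "shadows" texture
      this.render_in_texture("shadows", () => {
        this.shadows.render(scene_state);
      });

      // 4. Compositing: mix "base" and "shadows" onto the canvas
      this.map_mixer.render(scene_state, this.texture("base"), this.texture("shadows"));
    }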

Each rendering pass in stages 2 and 3 – e.g., terrain, Blinn-Phong, reflections, shadows – has its own render() function. These functions:

  • Iterate over the objects in the scene and exclude objects that don’t meet certain criteria (e.g., not reflective, not Blinn-Phong shaded).
  • Fetch object-specific data (e.g., mesh, textures, transformation matrices).
  • Pass rendering data to the pipeline. See the Shader Interfaces section for more details.

Buffer Rendering

Instead of pushing the result of a rendering call directly to the canvas, one can decide to store it in some intermediate buffer for later use.

Inside a SceneRenderer object, the member functions create_texture_and_buffer and render_in_texture can be called to create a new buffer and render into it, respectively. The texture() function can later be used to retrieve the texture data for further processing by passing the texture name as an argument. Refer directly to scene_renderer.js for more details: the function signatures are documented in the code.
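As an illustration, the three functions combine along these lines. This is a sketch: the options object and the callback-style usage of render_in_texture are assumptions, and the signatures documented in scene_renderer.js are authoritative.

    // Create a named texture and its framebuffer, e.g. in the constructor
    this.create_texture_and_buffer("my_buffer", { width: 1024, height: 1024 });

    // Render a pass into the buffer instead of onto the canvas
    this.render_in_texture("my_buffer", () => {
      this.blinn_phong.render(scene_state);
    });

    // Retrieve the stored texture, e.g. to pass it to another shader
    const my_texture = this.texture("my_buffer");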

For an example of how to visualize a buffer on screen, see the Procedural Texture Generator section.

Shader Interfaces: shader_renderer.js

A ShaderRenderer object is responsible for rendering a specific effect, such as Blinn-Phong shading, shadow mapping, or reflections. Specific computations can also be associated with a ShaderRenderer object, such as noise generation or procedural texture generation. Each ShaderRenderer subclass is associated with a pair of GLSL shader files, vert.glsl and frag.glsl, which implement the specific rendering effect. For example, BlinnPhongShaderRenderer uses blinn_phong.vert.glsl and blinn_phong.frag.glsl. We suggest you respect this convention when defining new shaders.

Upon instantiation, a ShaderRenderer class calls the init_pipeline() function to create a regl pipeline – also known as a command – which is stored in the pipeline member variable. The instantiation call is done via the regl() function, which can take a collection of parameters as input. Such parameters include, for instance, the vertex and fragment shaders, the blend mode, the depth settings, the attributes, and the uniforms. The uniforms parameter is particularly relevant. You should check, for example, how BlinnPhongShaderRenderer overrides the uniforms() function to provide view matrices, light, and material data to the shader.
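Stripped to its essentials, such a pipeline has roughly the following shape. This is a sketch: the attribute and uniform wiring, the resource names, and the this.regl member are assumptions; the init_pipeline() implementations in render/shader_renderers/ are the reference.

    init_pipeline() {
      return this.regl({
        // Shader pair implementing the effect
        vert: this.resource_manager.get("blinn_phong.vert.glsl"),
        frag: this.resource_manager.get("blinn_phong.frag.glsl"),

        // Per-vertex data, read from the object's mesh
        attributes: {
          vertex_position: this.regl.prop("mesh_vertex_positions"),
          vertex_normal: this.regl.prop("mesh_vertex_normals"),
        },
        elements: this.regl.prop("mesh_faces"),

        // Per-draw data; subclasses override uniforms() to supply
        // view matrices, lights, and material parameters
        uniforms: this.uniforms(this.regl),

        depth: { enable: true },
      });
    }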

The render() Loop

We can now call the pipeline to render the scene on canvas.

In particular, the render() function of each ShaderRenderer subclass (see the sketch after the list):

  • Iterates over all objects in the scene.
  • Prepares an array of data called inputs to be passed to the pipeline. These data can include mesh geometry, transformation matrices, light information, and material data.
  • If needed, excludes objects based on material criteria. For example, a fully reflective object will only be shaded by a MirrorShaderRenderer.
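Schematically, a render() implementation can look as follows. The per-object field names and the exclusion predicate are assumptions here; blinn_phong_sr.js is a good file to compare against.

    render(scene_state) {
      const inputs = [];

      for (const obj of scene_state.scene.objects) {
        // Skip objects this effect does not apply to; the material
        // criteria live in exclude_objects()
        if (this.is_excluded(obj)) continue;  // hypothetical predicate

        inputs.push({
          mesh: this.resource_manager.get(obj.mesh_reference),
          mat_mvp: obj.mat_mvp,               // hypothetical matrix field
          material: obj.material,
        });
      }

      // Calling a regl command with an array of property objects
      // executes one draw call per entry
      this.pipeline(inputs);
    }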

Shaders: vert.glsl and frag.glsl

Once the shader_renderer calls the pipeline, the work is handed over to the GLSL shaders. This is the bottom of the pipeline, where the visual information is computed and drawn on a support, be it the canvas or a buffer.

Please refer to the homework handouts, in particular to the tutorial in homework 3 (GL1), for more details on how GLSL shaders work.

3. Additional Components

Scene

A scene is a collection of objects, lights, and a camera that together determine the look of the image rendered on the canvas.

The Scene class defined in the file scenes/scene.js provides a template for creating new scenes. The DemoScene class in scenes/demo_scene.js is an example of a scene that extends the Scene class. You can edit the TutorialScene class to get familiar with the basic aspects of scene definition; see the Tutorial section for more details.

The main fields of a Scene object include:

  • camera: The main camera of the scene, by default a TurntableCamera object.
  • lights: A list of light sources, by default point lights with a color and a position.
  • objects: A list of objects to be rendered in the scene. Each object has a scale, translation, mesh_reference, and material property.
  • actors: The set of objects that dynamically evolve with time. Each actor must provide an evolve() function that updates its state.

Three helper functions are provided in the Scene class template and can be overridden in specialized subclasses (a minimal example follows the list):

  • initialize_scene(): Defines the objects and actors in the scene.
  • initialize_actor_actions(): Defines the dynamic behavior of each actor by setting its evolve() function.
  • initialize_ui_params(): Adds scene-specific UI parameters for interactive control of selected scene elements.
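Put together, a minimal subclass can look like this. This is a sketch: the import paths, the MATERIALS handle, and the field values are placeholders; DemoScene shows the real conventions.

    import { Scene } from "./scene.js";
    import { MATERIALS } from "../render/materials.js";  // path is a guess

    export class MinimalScene extends Scene {

      initialize_scene() {
        // One point light with a position and a color
        this.lights.push({
          position: [5.0, 5.0, 5.0],
          color: [1.0, 1.0, 1.0],
        });

        // One static object
        this.objects.push({
          translation: [0, 0, 0],
          scale: [1, 1, 1],
          mesh_reference: "suzanne.obj",
          material: MATERIALS.gray,
        });
      }

      initialize_actor_actions() {
        // No actors in this minimal scene
      }

      initialize_ui_params() {
        // No scene-specific UI parameters
      }
    }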

Procedural Texture Generator

Procedural texture generation is a technique used to create textures algorithmically rather than loading them from a file. In homework 6, we used noise textures to procedurally generate a 2D terrain mesh.

The ProceduralTextureGenerator class defined in render/procedural_texture_generator.js provides two main functions:

  1. compute_texture(): Compute procedural textures in dedicated shaders, managed by a NoiseShaderRenderer. The shaders we provide as a starting point have the same structure as the ones used in homework 6.
  2. display_texture(): Use a BufferToScreenShaderRenderer to display the computed texture, which is stored in a framebuffer, on screen. This can be a useful feature for debugging and code validation.

Note: the ProceduralTextureGenerator constructor requires a ResourceManager object to make the dynamically generated texture available to other parts of the pipeline.
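A typical usage pattern looks roughly as follows. The constructor and function arguments shown here are assumptions; the actual signatures are documented in procedural_texture_generator.js.

    const texture_generator = new ProceduralTextureGenerator(regl, resource_manager);

    // Compute a noise texture in a dedicated shader pass and register it
    // under a name, so other parts of the pipeline can retrieve it
    texture_generator.compute_texture("noise_texture", { width: 512, height: 512 });

    // Display the buffer content on screen, e.g. while debugging
    texture_generator.display_texture("noise_texture");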

Resource Manager

The ResourceManager class defined in scene_resources/resource_manager.js is responsible for loading the shaders, textures, and meshes, which are located in the src/render/shaders and assets/ directories. To load a new resource, you first have to add it to the list defined in shaders_to_load(), textures_to_load(), or meshes_to_load(). The function load_resources(), usually called in main(), is responsible for the actual loading of the resources. All the loaded resources can then be retrieved during the render process with the help of the get() function.
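For example, registering and retrieving a new mesh reduces to something like this (a sketch: the exact shape of the lists returned by the *_to_load() functions may differ, so check resource_manager.js):

    // In resource_manager.js: register the new asset
    meshes_to_load() {
      return [
        // ...existing meshes...
        "suzanne.obj",
      ];
    }

    // Anywhere after load_resources() has completed:
    const mesh = resource_manager.get("suzanne.obj");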

Camera

The camera is the object that makes rendering on screen possible by projecting the 3D scene to the 2D canvas.

In scene_resources/camera.js we provide an implementation of the TurntableCamera you should already be familiar with from the homeworks.

The camera is in charge of updating the camera transformation matrices (update_cam_transform()) and updating the transformation matrices for all objects in the scene (compute_objects_transformation_matrices()).

The camera comes with some helper functions to handle UI interactions, such as moving and zooming, and to set predefined views.

Materials

The file materials.js contains the implementation of the base Material class and of some simple specialized material types, like DiffuseMaterial and ReflectiveMaterial.

To instantiate a specific material, an object of the corresponding material type must be declared. Some examples are already provided: pine shows how to create a material based on a texture saved in a .png file, while gray is a monochromatic DiffuseMaterial. Alternatively, monochromatic textures can also be defined directly in the code: see e.g. the gold material and the make_texture_from_color() function in ResourceManager. More advanced material types, like the TerrainMaterial, can contain additional properties like the color of water, grass, and mountain peaks.

After the material is created, it can be assigned to an object in the corresponding scene by setting the object’s material. Check demo_scene.js and look for the MATERIALS keyword to see how materials are used in practice.
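For instance, the two declaration styles mentioned above look roughly like this. This is a sketch: the constructor parameters and the texture names are assumptions, and the provided pine and gold declarations in materials.js are the reference.

    // Texture-based material, analogous to the provided `pine`
    export const bark = new DiffuseMaterial({ texture: "bark.png" });

    // Monochromatic texture defined directly in code, analogous to `gold`
    export const copper = new DiffuseMaterial({
      texture: resource_manager.make_texture_from_color([0.72, 0.45, 0.2]),
    });

    // Assignment, inside a scene's initialize_scene():
    this.objects.push({
      translation: [0, 0, 0],
      scale: [1, 1, 1],
      mesh_reference: "suzanne.obj",
      material: bark,
    });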

4. Overview of the Source Code

  • cg_libraries/: Helper functions.
    • cg_math.js: Additional functions for JavaScript vectors and matrices.
    • cg_mesh.js: Functions for simple mesh generation and mesh loading into regl buffers.
    • cg_mesh_render_utils.js: Functions commonly used by ShaderRenderer objects.
    • cg_screenshot.js: Function to download the current canvas as an image.
    • cg_web.js: Functions related to the web browser (content loading, shortcut binding, interface creation).
  • render/: Classes and methods used to render the scene.
    • shader_renderers/: JavaScript files that bridge between the main code (.js) and the shader files (.glsl).
      • blinn_phong_sr.js: Render a scene with Blinn-Phong shading.
      • buffer_to_screen_sr.js: Display the content of a buffer on screen, used for example in ProceduralTextureGenerator.
      • flat_color_sr.js: Render a scene without shading, only a flat base color or texture, used to render the background environment.
      • map_mixer_sr.js: Mix a texture containing shading of the scene, but no shadows, and a texture containing cast shadows.
      • mirror_sr.js: Render an object with mirror effect using a cube map (see the EnvironmentCapture class).
      • noise_sr.js: Compute a variety of different noise functions and write the results into a buffer, used in ProceduralTextureGenerator.
      • pre_processing_sr.js: Preprocess the scene by applying pure black color to all objects and filling the z-buffer.
      • shadow_map_sr.js: Compute the distance of a fragment from the viewpoint, which can be used to fill a cube map.
      • shadows_sr.js: Compute shadows cast by all objects in the scene with the help of a cube map (see the EnvironmentCapture and ShadowMapShaderRenderer classes).
      • terrain_sr.js: Render a scene with Blinn-Phong shading, mixing colors depending on the height of the fragment, used for 2D terrain.
    • shaders/: frag.glsl and vert.glsl files implementing the core of the shader renderers described above.
    • env_capture.js: Class implementing environment capture, computes and stores a cube map, used by both MirrorShaderRenderer and ShadowsShaderRenderer.
    • materials.js: Classes defining different parametric materials; specific material properties can be checked by the exclude_objects() function of a ShaderRenderer to skip them when rendering certain effects.
    • procedural_texture_generator.js: Class for procedural asset generation.
    • scene_renderer.js: Top-level class in charge of rendering the final scene.
  • scene_resources/: Classes and helpers used for scene generation and viewing.
    • camera.js: Class implementing a turntable camera, which computes transformation matrices to project the 3D space into the 2D canvas displayed on screen.
    • resource_manager.js: Class that handles loading and storage of the resources, like meshes, textures, shaders, etc.
    • terrain_generation.js: Helper functions for procedural terrain generation.
  • scenes/: Classes defining the rendered scene.
    • scene.js: The base class implementing the scene interface.
    • demo_scene.js: An example class that extends the base class and implements the demo terrain scene.
  • main.js: The entry point of the code to be executed on the browser, defines the regl canvas, instantiates scenes and interface, and executes the regl.frame() render loop.

5. Tutorial

This section provides a step-by-step guide to assembling your first scene. Your starting point will be the template file scenes/tutorial_scene.js, which is mostly empty at the beginning of this tutorial. It will be your task to fill in the missing pieces.

Some of the following instructions are intentionally left vague to encourage exploration and experimentation. You can take the DemoScene class as reference to check how some of the tasks described below can be implemented. We also provide a set of hints in case you do not know how to proceed. If after checking all these sources you are still stuck – e.g. you cannot import an asset, or get some cryptic error message – ask for help!

  1. Instantiate the scene.

    1. In main.js, instantiate a TutorialScene object and assign it to the active_scene variable.

    2. Open index.html: you should see a black screen.

    Tutorial Scene Step 1

    Black screen corresponding to the initial TutorialScene.

  2. Add an object.

    1. Generate a simple mesh in Blender, export it as an .obj file, and save it in the assets/ directory. The file assets/suzanne.obj is the test mesh we used to illustrate this tutorial. You can use it to compare your results with our screenshots.

    2. Load the mesh in the ResourceManager by adding it to the meshes_to_load() function in resource_manager.js.

    3. Define a new object in the initialize_scene() function. An object is essentially a list of attributes. One of the attributes is the object’s mesh: assign the mesh loaded from file to the object.

    4. Assign a material to the object.

    5. You should now see your object on the screen. Why are some parts of the object dark, but none is completely black? Make sure you understand where the light contributions come from.

    Tutorial Scene Step 2

    Suzanne using the gray diffuse material.

  3. Add a skysphere.

    1. Generate the sphere mesh. Instead of importing the mesh from file, in this case it is more convenient to use the function cg_mesh_make_uv_sphere() to generate the mesh procedurally in JavaScript.

    2. Add the mesh to the available resources in the ResourceManager by calling the add_procedural_mesh() function.

    3. Define a new skysphere object in the initialize_scene() function. Assign the mesh to the object.

    4. Assign the sunset_sky material to the skysphere object.

    5. You can now create a custom environment map. Declare a new material of type BackgroundMaterial in materials.js and assign a custom texture to it. Assign this material to the skysphere object.

    Tutorial Scene Step 3

    Suzanne with a skysphere.

  4. Convert your object into a perfect mirror.

    1. In materials.js, define a mirror material of type ReflectiveMaterial.

    2. Assign this material to the object.

    Tutorial Scene Step 4

    Suzanne rendered using an instance of a ReflectiveMaterial.

  5. Animate the object.

    1. Actors are objects that can be dynamically updated. Add your object to the set of actors. Note: you will get an error in the console until you complete the actor’s initialization by defining the evolve() function, see next step.

    2. In initialize_actor_actions(), define an evolve() function for the actor. This function can, for example, update the object’s scale or translation fields to make it grow or move around the scene.

    Animated Suzanne with dynamic scaling.

  6. The actions of each actor can be dynamically updated. Add some custom handles to the UI to control the behavior of the actors in the scene.

    1. Set a camera preset view and assign it to a key. You can use the create_hotkey_action() function to bind a key to a specific action. In this case, you should call this.camera.set_preset_view() whenever the key, e.g., 1, is pressed.

    2. Add a slider that controls the vertical displacement of the object. Use the create_slider() function.

    An interactive slider controls Suzanne’s position.

  7. Add a button to turn on and off the mirror effect.

    1. Add a boolean variable is_mirror_active to this.ui_params to store the state of the mirror effect. Set it to true by default in initialize_scene().

    2. Add a button to the UI that toggles the value of is_mirror_active when pressed. You can use the create_button_with_hotkey() function.

    3. In scene_renderer.js, locate the call to this.mirror.render() and add a condition to check if is_mirror_active is true. If the flag is set to false, skip the rendering of the mirror effect.

    Tutorial Scene Step 7

    Suzanne rendered using a ReflectiveMaterial with mirror reflections deactivated.

  8. Add textures.

    1. Why is the object fully magenta when the mirror is turned off? Check the materials.js file and make sure you understand what is going on.

    2. In materials.js, modify the ReflectiveMaterial class to support textures.

    3. In materials.js, modify the instantiation of the mirror material to load a texture from file. If loaded correctly, when the mirror is deactivated, the texture will be displayed in place of the magenta shading.

    Tutorial Scene Step 8

    Suzanne rendered using a textured ReflectiveMaterial.
    When reflections are deactivated, the underlying texture is shown.
    Here, the texture is the procedurally generated marble pattern from PG1.

  9. Add a shader to visualize the object normals using a false color encoding.

    1. Add a file in src/render/shader_renderers/ implementing a new NormalsShaderRenderer class.

    2. Add the corresponding .glsl shaders in src/render/shaders/.

    3. In scene_renderer.js, define a NormalsShaderRenderer object in SceneRenderer’s constructor. Use it in the render() function to render the normals in the base buffer.

    4. Optionally add a button to turn on and off the rendering of the normals.

    Tutorial Scene 9a Tutorial Scene 9b

    Left: Suzanne rendered with normals in world space.
    Right: Suzanne rendered with normals in view space.

  10. Visualize a buffer by rendering its content on canvas.

    1. In SceneRenderer, shadows are first rendered in a buffer (shadows), and later mixed with the shaded scene (base) in the compositing step. Visualize the shadow texture on screen.

    2. You can visualize any kind of content that is stored in a buffer on canvas. We provide helper functions to visualize, for example, the content of a cubemap, as seen in homework 5. Look for env_capture.visualize() in SceneRenderer: after uncommenting the corresponding line, you will see the cubemap associated with your mirror object appear on screen.

    Tutorial Scene 10a Tutorial Scene 10b

    Left: Visualization of the shadow buffer.
    Right: Visualization of the environment cubemap.

Congratulations! You have completed this introductory tutorial.

Remember that this is just the starting point. Some of the new features you will implement, especially the ones worth more points, will require substantial modifications of the code. These modifications will go beyond simple edits and adaptations of existing classes. They will require you to think about how to modify existing function signatures, introduce original functions, and add classes and abstractions to implement advanced behavior of all the GPU pipeline components.

Have fun!

Hints

1.1 Define a TutorialScene object.

You can define a TutorialScene object as follows:

const tutorial_scene = new TutorialScene(resource_manager);

Note that, unlike the DemoScene class, we do not need a ProceduralTextureGenerator: we only pass a ResourceManager as an argument. You can now assign the tutorial_scene variable to active_scene.

2.1 Generate a simple mesh in Blender.

You can generate a simple mesh in a Blender scene by pressing Shift + A and selecting the Mesh option.

It can be a good idea to instantiate a shape without symmetries to better understand how the coordinate system works. You can, for example, start from a cube and edit it following the Blender tutorial from homework 0, or directly load the castle mesh you got back then.

When exporting the mesh, make sure you check the Selection Only box in the export settings. To preserve the coordinate system, you might also want to switch from the default -Z forward to Y forward; as you do this, the Up vector will also automatically switch from Y up to Z up.

2.2 - 2.4 Add an object to the scene.

Check the DemoScene class to see how new objects can be declared, and corresponding meshes and materials assigned.

2.5 Understand the light sources.

The light currently comes from two sources: a point light and the ambient light. Search in the TutorialScene and Scene classes respectively to identify how these components are defined.

3.1 - 3.4 Add a skysphere.

Check the DemoScene class to see how to procedurally generate the sphere mesh and how to instantiate a skysphere object.
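If you prefer a concrete starting point, the procedure reduces to something like this. This is a sketch: the resolution argument, the resource name, and the MATERIALS.sunset_sky handle are placeholders.

    // In TutorialScene.initialize_scene()
    const mesh_sphere = cg_mesh_make_uv_sphere(16);
    this.resource_manager.add_procedural_mesh("mesh_sphere", mesh_sphere);

    this.objects.push({
      translation: [0, 0, 0],
      scale: [100, 100, 100],          // large enough to enclose the scene
      mesh_reference: "mesh_sphere",
      material: MATERIALS.sunset_sky,
    });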

3.5 Add a custom environment map.

Poly Haven is a good source for free, CC0-licensed textures. When downloading an HDR image for environment lighting, you will often find formats like .hdr or .exr. These formats are not natively supported by our framework, which reads an environment map as a standard texture, but you can easily convert the file to .png or .jpg using software like GIMP or Photoshop. Many online converters exist as well. On Poly Haven you can download a high-res, low dynamic range version of the image in .jpg format as shown in the screenshot below. Be careful: the image size can get very large for high-resolution maps.

Poly Haven HDRI Download

4.1 Define a mirror material.

Defining an instance of fully reflective material is as straightforward as:

export const mirror = new ReflectiveMaterial({});

We provide code to build and sample the cube map used for environment capture. Make sure you check the MirrorShaderRenderer and the corresponding mirror.*.glsl shaders in case you want to customize the reflection effect.

5.1 - 5.2 Add an actor and define its evolve() function.

Check the DemoScene class to see how to define and handle actors.

The following evolve() function was used to generate the animation in the demo gif:

    this.phase = 0;  // local phase tracker

    this.actors["object"].evolve = (dt) => {

      // Advance phase using dt (control speed via frequency)
      const frequency = 0.25; // oscillations per second
      this.phase += dt * 2 * Math.PI * frequency;

      // Keep phase in [0, 2π] to avoid overflow
      this.phase %= 2 * Math.PI;

      // Procedurally animate the object
      const grow_factor = 0.2;
      const scale_new = 1 + Math.cos(this.phase) * grow_factor;
      this.actors["object"].scale = [scale_new, scale_new, scale_new];

    };

6.1 Add a camera preset view.

For the scene screenshots, we used the following camera preset view:

    // Set preset view
    create_hotkey_action("Preset view", "1", () => {
      this.camera.set_preset_view({
        distance_factor : 0.2,
        angle_z : -Math.PI / 2,
        angle_y : 0,
        look_at : [0, 0, 0]
      })
    });

This code snippet should be added to the initialize_ui_params() function, see DemoScene for reference.

6.2 Add a slider that controls the vertical displacement of the object

The slider can be instantiated in the initialize_ui_params() function as follows:

    // Create a slider to change the object height
    const n_steps_slider = 100;
    const min_obj_height = 0;
    const max_obj_height = 1;
    create_slider("Suzanne's position ", [0, n_steps_slider], (i) => {
      this.ui_params.obj_height = min_obj_height + i * (max_obj_height - min_obj_height) / n_steps_slider;
    });

Make sure you add obj_height to this.ui_params when initializing the scene. For the slider to have an effect on the scene, you also need to modify the evolve() function of the actor such that the translation field is updated accordingly.

7.1 - 7.3 Add a button to activate/deactivate the mirror effect.

After defining:

    this.ui_params.is_mirror_active = true;

in initialize_ui_params(), you can add the button as follows:

    // Create a button with a shortcut key to enable/disable mirrors
    create_button_with_hotkey("Mirror on/off", "m", () => {
      this.ui_params.is_mirror_active = !this.ui_params.is_mirror_active;
    });

You will now see the button appearing in the interface. To link it to its function, you still need to edit the SceneRenderer class. Inside the render() function, you can access UI parameters with scene_state.ui_params. Run the mirror render pass only if scene_state.ui_params.is_mirror_active is true.
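In other words, the guard reduces to something like this (the arguments to mirror.render() are elided here):

    // Inside SceneRenderer.render()
    if (scene_state.ui_params.is_mirror_active) {
      this.mirror.render(scene_state);
    }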

8.2 - 8.3 Load and assign textures

You can check how textures are handled, for example, by a DiffuseMaterial. Your improved ReflectiveMaterial should work analogously.

Remember that whenever importing a resource from file, like for example a .png image, you first need to add it to the ResourceManager to make it available to the rest of the code.
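A possible shape for the modified class is sketched below. The base-class constructor and the option-object convention are assumptions: mirror how DiffuseMaterial actually handles its texture.

    // In materials.js
    export class ReflectiveMaterial extends Material {
      constructor({ texture = null } = {}) {
        super();
        this.texture = texture;  // shown when reflections are deactivated
      }
    }

    // Instantiation with a texture loaded from file; remember to register
    // the image in the ResourceManager first
    export const mirror = new ReflectiveMaterial({ texture: "marble.png" });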

9.2 Add the shaders to render normals as false colors

You can reuse the shaders you implemented in homework 4. Minor adaptations in variable names might be required.

Experiment with and without the transformation in view space (mat_normals_to_view). Make sure you understand the difference between the two cases.
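For reference, the core of the fragment shader is just the false-color mapping. A minimal GLSL sketch, assuming a v_normal varying passed from your vertex shader:

    // normals.frag.glsl -- minimal false-color encoding of normals
    precision mediump float;

    varying vec3 v_normal;  // interpolated normal from the vertex shader

    void main() {
      // Map each component of the unit normal from [-1, 1] to [0, 1]
      vec3 color = 0.5 * normalize(v_normal) + 0.5;
      gl_FragColor = vec4(color, 1.0);
    }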

10.1 Visualize the shadow buffer

In SceneRenderer, the last call to a ShaderRenderer’s render() function that outputs the result on canvas is what is displayed on screen. Additional calls will overwrite the content that is already displayed. To visualize the content of the shadow buffer it is then enough to call

this.shadows.render(scene_state);

right before SceneRenderer’s render() function returns.