CS-341 Computer Graphics 2025
The goal of this project is to solve an interesting problem in the domain of computer graphics. You will use notions learned during the course and apply the skills and techniques acquired by solving the homework assignments.
You will make a project proposal, research ways to solve the chosen problem, organize and schedule your work, implement your solution, edit a trailer video, present your results in front of the class, and write a final report. The specific scope of the problem, such as the scene to render or simulate, is left to your discretion. A successful project can showcase, for example, real-time rendering, procedural scene modelling, data visualization, or the simulation of a physical system.
We provide you with a list of features that can be implemented to
extend the initial WebGL/regl framework. You will select a
subset of these features and implement them, effectively creating a
custom WebGL application. Your final grade will be determined by several
components; see the Grading Section for more details on the evaluation
criteria.
You will work in the same groups as for the homework assignments.
| What | When |
|---|---|
| Project proposal due | Wednesday, April 16th at 14:00 |
| Milestone report due | Wednesday, May 7th at 14:00 |
| Final video due | Monday, May 26th at 14:00 |
| Final report due | Friday, May 30th at 23:59 |
Note that the last two deadlines are not on Wednesday at 14:00.
We provide you with templates to compile:
The templates are available on Moodle. Please refer to the dedicated sections for more details.
To ease the compilation of the Markdown files into HTML, we provide you with a minimal Makefile using pandoc. After installing pandoc, simply run `make` from the root directory to generate the `<template>.html` file, where `<template>` can correspond to `proposal`, `milestone`, or `final`. You can then open the HTML file in the browser.
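If you want to adapt the build, the provided Makefile boils down to a pattern rule of roughly this shape (a sketch of the idea only; the actual rule shipped in the archive may differ):

```makefile
# Illustrative sketch -- refer to the Makefile in the archive for the real rule.
%.html: %.md
	pandoc --standalone $< -o $@
```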
We provide you with a framework that you can use as a starting point for your project. You can download it from Moodle.
The framework is implemented in JavaScript and GLSL, and uses the
regl library for WebGL rendering. It is designed to be
modular and extensible, allowing you to add new features and effects. A
set of basic features, such as camera control, lighting, and the basic
rendering techniques you worked on in the homework assignments, are
already implemented.
At the beginning of the project you will have time to get familiar with the code, read the documentation, and do a tutorial to understand how to use its main features.
For more information, we refer to the framework documentation included in the archive you downloaded from Moodle.
You are encouraged to use external resources (e.g. papers, blog posts, tutorials, etc.) to help you with your project. Here is a list of resources that can be useful to learn more about WebGL and get some inspiration when deciding the theme of your project:
For more details on the use of third-party code see the External Resources, AI, and Collaboration Policy Section.
Due date: Wednesday, April 16th at 14:00
You will write a project proposal detailing the goals you want to achieve and the features you plan to implement. You will submit the proposal on Moodle; we will review it and provide feedback.
A proposal template is included in the archive available on Moodle. It is structured as follows:
- Title
- Give your project a title that reflects the main idea or theme.
- Abstract
- Summarize your project in a few sentences. Here are some questions that can help you describe your project:
- What do you want to achieve?
- How do you plan to do it?
- Why do you think it is interesting?
- Which new techniques or algorithms do you plan to add to the `regl` framework?
- Which technical challenges do you expect to face?
- Add one (or more) images illustrating the effects, the type of scene, the phenomena you want to reproduce.
- Features
- List the features you want to implement, according to the list we provide (see the Features Section).
- Schedule
- Organize and plan the tasks and subtasks that your team will perform. Give a detailed plan on what you expect to accomplish in each week. Specify which member of your group will be responsible for each task.
- Resources
- List the resources (e.g. books, papers, blogs, online tutorials) that you plan to use when implementing your project.
- Note that using external libraries is generally not allowed (see the External Resources, AI, and Collaboration Policy Section).
If your plans change as you start working on the feature implementation, please send us an updated proposal as soon as possible. Do not wait until the milestone report!
Due date: Wednesday, May 7th at 14:00
The milestone report is a progress update you will submit halfway through the project. It is an opportunity for you to receive feedback on your progress. It is also your last chance to adjust your work plan, in case this is needed.
A milestone report template is included in the archive available on Moodle. It is structured as follows:
- Progress Summary
- Summarize what you have accomplished so far.
- Show some preliminary results.
- Optionally present the validation of any feature you have already implemented. This is not mandatory, but it can help you get useful feedback for the final report.
- Report the number of hours each team member worked on the project.
- Comment on the accuracy of your workload estimates. Critically reflect on your plan and assess if you are on track.
- Schedule Update
- Acknowledge any delays or unexpected issues, and motivate proposed changes to the schedule, if needed.
- Present the work plan for the remaining time.
Note that the purpose of the milestone report is not to showcase a finished project (successfully completed features are of course welcome to be discussed) but to show us that you are either right on track or have restructured your plan to finish your project in time.
You will submit the milestone report on Moodle. We will review the report, grade it, and provide feedback. This grade will contribute to 10% of your final project grade.
Due date: Monday, May 26th at 14:00
The last lecture of the semester will be devoted to the presentation of the project results. You will submit a short video showcasing your results. The video will be played back-to-back with the videos from other groups. You will illustrate and discuss its content live.
The content of the video is up to you and depends on what format you think will best convey your achievements. The video can be a compilation of the results obtained, or a short animation that shows the effect of all the features you implemented in a single coherent scene. Depending on the project scope, it can contain some technical explanations. For instance, the first half could consist of an overview of an interesting implementation aspect of your project, and the second part could show results, either with an animation or as a slideshow if your project is not animated. In general, we suggest keeping the technicalities for the report and focusing primarily on assembling a visual compendium of the rendering capabilities of your application in the video.
More details about the presentation format will be provided later in the semester.
We will grade your video according to the criteria discussed in the Grading section. This grade will contribute to 20% of your final project grade.
During the final presentation you will be asked to vote for the three best projects. Each member of the three teams that receive the most votes will be awarded a certificate acknowledging the quality of their work. This voting is done by the students, and will not contribute to the final grade. Instructions for the voting process and other organizational details will be announced before the presentation.
Due date: Friday, May 30th at 23:59
The final report is the main document on which we will base our evaluation of your work. The grade of your report will contribute to 50% of your final project grade.
A final report template is available on Moodle. It is structured as follows:
- Abstract
- Add a one-paragraph summary of the entire project.
- Overview
- Describe in more detail the problem you decided to solve, the effects you decided to model, or the scene you decided to render. Add some images or videos highlighting the most interesting aspects of your project.
- Feature validation
- List the features from your project proposal in the table. Mark them as completed, partially completed, or missing.
- For each feature:
- Describe how you implemented it. You can assume the reader is familiar with computer graphics, but imagine they never implemented the algorithm you are describing. You should be concise, clear, and complete. The information you provide should be enough to implement a minimal working version of the algorithm. When presenting complex features, explain your design choices and the reasons behind them (e.g. which new shaders you introduced, how you structured the pipeline, if there is any interaction between different shaders, if your implementation runs on CPU or GPU, etc.). In case results are not as expected, document the problems you encountered and explain your efforts to solve them (see also the Discussion section).
- Add images and/or videos that show how the current feature works, and discuss them. Include as many visuals as you see fit. You can introduce simplified test scenes designed to best isolate the behavior of the current feature. If there are parameters that control an effect, show how each of them individually affects the result by comparing different parameter values. Whenever possible, provide an image/video with the feature turned off as reference. Additional information like running time, memory usage, or other performance metrics can optionally be included here in the form of plots, graphs, or tables.
- Discussion
- Present any additional components that are not part of the features list. This can include, for example, a restructuring of the code base, an extension of the GUI, the inclusion of external libraries for additional effects, etc.
- Recap any failed experiments. You might have already discussed some in the Feature validation section. Here you can summarize them, and additionally discuss attempts to implement features that you decided not to include in the final report.
- Discuss general difficulties you might have encountered (e.g. working with a complex framework, learning a new programming language, managing team efforts, etc.) and how you tackled them.
- Contributions
- Report the number of hours each team member has dedicated to the project. Comment on the accuracy of your initial workload estimates.
- Report the percentage of the total workload accomplished by each team member. The contributions should be consistent with—but do not necessarily need to mirror exactly—the hours spent working on the project. Please provide a brief explanation in case major discrepancies exist.
- References
- List all the resources used in your work.
You will submit the final report and the code of your project on Moodle. For more details on the deliverables, please refer to the Submission Instructions Section.
The use of external libraries other than the ones already included in the framework (e.g. `gl-matrix`) is in general not allowed. Libraries that:
are an exception to this rule. Some features mention that the use of specific libraries of the first kind is allowed for the implementation of that specific feature (see the Features Section). If you think you need to use a library that is not mentioned in the feature description, please contact us. If in doubt, ask. To get our approval, you will need to motivate your request.
Using external assets (e.g. meshes, textures, etc.) is allowed provided they are properly credited, and the associated licenses are respected.
Using AI tools like ChatGPT, Copilot, Gemini, Midjourney, etc. is allowed. You must always clearly document all the resources you use in your proposal and final report, AI tools included.
A well curated list of resources is a valuable aspect of your report that should not be underestimated. Failure to cite relevant sources, in particular for implementation details, will be considered a violation of the EPFL honor code, and can result in a grade penalty, a disciplinary action, or failure of the course. We will be using automatic plagiarism detection software to check for violations of the academic integrity policy.
You are encouraged to discuss your project with other groups, but all the deliverables you submit must be your own work or that of your group. Any member of the group can be asked to explain any part of the project, and should therefore be aware of all aspects of the project, including implementation details.
The final grade will be determined by four components.
Each of the components will be graded on the usual scale from 1 to 6 in 0.25 increments. The final grade will be the weighted average of the four components, rounded to the nearest 0.25 increment.
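As an illustration, the combination rule can be sketched as follows. The function name and the example weights are hypothetical: this document states the milestone (10%), video (20%), and final report (50%) shares, and the remaining 20% is assumed here only for the sake of a complete example.

```javascript
// Sketch of the weighted-average rule described above. The 10/20/50
// percentages are stated in this document; the remaining 20% for the
// last component is an assumption made for this example.
function finalGrade(grades, weights) {
  const total = weights.reduce((s, w) => s + w, 0);
  const avg = grades.reduce((s, g, i) => s + g * weights[i], 0) / total;
  return Math.round(avg * 4) / 4; // round to the nearest 0.25 increment
}

// finalGrade([5.0, 5.5, 4.75, 5.25], [10, 20, 50, 20]) -> 5
```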
The milestone report will be graded based on the quality of your progress summary.
If you are on track, you should provide us with evidence of the work done so far. If you encountered difficulties, we expect you to have identified them. You should clearly explain what these difficulties are, why they were not expected, and how you plan to overcome them.
The grade will take into account your ability to manage the project, to identify and solve problems, and to communicate effectively with us, for example by asking for help soon enough to have time to adjust your plan.
We will grade the video based on the visual quality of the results, the quality of the video editing, the overall clarity of the presentation, and the ability to condense the most important information about your project in a short and effective visual teaser.
The final report will be the means by which we will evaluate whether you successfully implemented a feature. It is therefore crucial that you document the validation of your implementation in detail in the final report.
A feature implemented correctly but not properly validated will be worth substantially fewer points than a feature that has only been partially implemented, but for which extensive tests are included in the report and failure cases are thoroughly documented.
Example
Consider the Blinn-Phong implementation from the homework assignments. A good series of tests to convincingly validate the correctness of the method would include:
Assuming the implementation of this feature is worth 10 points, omitting the evaluation of the shininess parameter could, for example, result in a 3-point penalty. Only showing the rendering of a complex scene, without validating each material property individually, would substantially reduce the score associated with this feature, bringing the point count down by 6 or even 7 units.
Whenever possible, including a comparison with a reference implementation (e.g. the same scene rendered in Blender) would make the validation stronger and more convincing.
The code you submit should be well-tested, properly documented, and must reproduce the results you include in the report. The code has to run robustly. We might need to check how specific parts of your application work. If we cannot run the code, you will not receive the points for the features we could not test.
We will evaluate the quality of the visual results, the overall coherence of the implementation, and the originality of the theme.
All the deliverables can potentially contribute to this part of the grade: the project proposal, the milestone report, the final report, and the final video.
The submission of all the deliverables will be done through Moodle. It is your responsibility to meet the deadlines defined in this document. Remember that, in compliance with the same policy discussed at the beginning of the semester and used for the homework assignments, late submissions will not be accepted.
Due date: Wednesday, April 16th at 14:00
Download the `proposal.zip` archive from Moodle. Fill in
the template (see the Project Proposal
Section for more details).
Compress the directory into a zip archive named
proposal-group<i>.zip, where <i>
is your group number, and submit it on Moodle.
Due date: Wednesday, May 7th at 14:00
Download the `milestone.zip` archive from Moodle. Fill in
the template (see the Milestone Report
Section for more details).
Compress the directory into a zip archive named
milestone-group<i>.zip, where <i>
is your group number, and submit it on Moodle.
Due date: Monday, May 26th at 14:00
The video should have the following specifications:
- `mp4`, H264 encoding.

Note: The file size is capped at 100 MB. If your file is too large, you will have to compress it. A 60-second mp4 video in Full HD will easily fit this size bound without perceptible quality loss. For video compression you can use ffmpeg; see an example here.
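A typical compression command looks like the following (an illustrative invocation, not necessarily the linked example; input filename is a placeholder, and `<i>` stands for your group number):

```sh
# Re-encode a raw capture as an H264 mp4; -crf 23 is a reasonable
# quality/size trade-off (lower = better quality, larger file).
ffmpeg -i raw_capture.mov -c:v libx264 -crf 23 -pix_fmt yuv420p "video-group<i>.mp4"
```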
Name the file video-group<i>.mp4.
Submit the video on Moodle.
Due date: Friday, May 30th at 23:59
Download the `final.zip` archive from Moodle. Fill in the
template (see the Final Report Section for
more details).
Compress the directory into a zip archive named
final-group<i>.zip, where <i> is
your group number, and submit it on Moodle (Project Report
tab).
Note: The maximum file size is 250 MB. If your zip file is larger, this is likely due to videos. Raw screen captures are usually uncompressed. Convert your videos into mp4 files to reduce their size without perceptible quality loss. For video compression you can use ffmpeg, see an example here.
Compress your code directory into a zip archive named
code-group<i>.zip and submit it on Moodle
(Project Code tab).
The features you can implement are divided into two categories: rendering and modeling. You must pick at least two features from each category.
The features are divided into three levels of difficulty: easy, medium, and hard, worth 5, 10, 20 points, respectively. You need to pick features for a total of (at least) 50 points, with at least one hard feature.
The maximum number of points you can get from correctly implementing the features is capped at 50. You can pick more features than needed to reach this cap, but if you decide to do this you will need to adapt the number of points (i.e. decrease the points assigned to some features) to reach exactly 50 points.
Example
The following table shows a valid set of features with the amount of adapted points assigned to each:
| Feature | Points | Adapted Points |
|---|---|---|
| Feature 1 | 5 | 5 |
| Feature 2 | 10 | 5 |
| Feature 3 | 10 | 10 |
| Feature 4 | 20 | 10 |
| Feature 5 | 20 | 20 |
Each feature description contains one or more links to external resources that explain the specific effect, algorithm, or method you are asked to implement in more detail.
The list of features is not exhaustive. If you have an idea for a feature that is not listed here, please contact us with a proposal to discuss it.
Outlining

Object outlining (David Lettier)
Implement an outlining effect to highlight the boundaries of objects in the scene.
Fog

Transient fog effect (David Lettier)
Implement fog to add depth and atmosphere to the scene.
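A minimal sketch of one common fog model, assuming a simple exponential falloff (the linked tutorial may use a different formula). In practice the blend runs in a GLSL fragment shader; plain JavaScript is used here only to illustrate the math, and all names are illustrative:

```javascript
// Blend the shaded color toward the fog color with a factor that
// decays exponentially with distance from the camera.
function applyFog(color, fogColor, distance, density = 0.15) {
  const f = Math.exp(-distance * density); // 1 near the camera, 0 far away
  return color.map((c, i) => f * c + (1 - f) * fogColor[i]);
}
```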
Depth of Field

Depth of field (David Lettier)
Implement depth of field to realistically simulate the effect of the camera focus: when objects in the foreground are in focus, objects in the background are not, and vice versa.
Bloom

Bloom effect (LearnOpenGL, image from Epic Games)
Implement bloom to simulate the effect of bright light sources bleeding into the surrounding area, creating a soft glow.
You can see example applications by searching for bloom on ShaderToy.
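Bloom pipelines typically start with a bright-pass step, which the following sketch illustrates (the threshold value and function name are illustrative; in practice this runs per pixel in a fragment shader):

```javascript
// Keep only pixels whose luminance exceeds a threshold; the result is
// then blurred and added back onto the original image to produce the glow.
function brightPass(rgb, threshold = 1.0) {
  // Perceptual luminance weights for linear RGB (Rec. 709).
  const luma = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2];
  return luma > threshold ? rgb : [0, 0, 0]; // dark pixels contribute nothing
}
```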
Normal Mapping
Normal mapping (David Lettier)
Implement normal mapping to enhance surface detail and lighting.
Toon Shaders

Object posterization (David Lettier)
Implement advanced toon shading to give your scene a cartoon-like appearance.
The list of possible toon shaders is virtually endless (see e.g. examples on ShaderToy).
To get full points in this feature, you can either decide to implement a single tunable toon shader (for example, the user can select quantization levels and colors, the level of detail, etc. from the GUI) or multiple toon shaders with fixed parameters.
For an example of an advanced toon shader see e.g. X-Toon: An extended Toon Shader [Barla and Markosian 2006].
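The core of most toon shaders is a posterization step, sketched below. In practice it runs in a GLSL fragment shader; plain JavaScript is used here only to illustrate the math, and `levels` would be the kind of parameter exposed in the GUI:

```javascript
// Map a continuous intensity in [0, 1] onto `levels` discrete bands,
// producing the flat color steps characteristic of toon shading.
function quantize(intensity, levels) {
  return Math.min(Math.floor(intensity * levels), levels - 1) / (levels - 1);
}
```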
Soft Shadows

Comparison between hard and soft shadows (RenderMan engine)
Implement Percentage-Closer Soft Shadows to simulate the gradual fading of shadows associated with area lights and create more natural-looking lighting.
For more details see Percentage-Closer Soft Shadows [Fernando 2005].
Screen-Space Ambient Occlusion
Screen-space ambient occlusion (David Lettier)
Ambient occlusion is a shading and rendering technique used to calculate how exposed each point in a scene is to ambient lighting. Implement screen-space ambient occlusion (SSAO) to approximate how close objects influence each other’s shading when illuminated by ambient light.
For a step-by-step tutorial we recommend LearnOpenGL/SSAO.

Screen-space reflections (David Lettier)
Screen-space reflection (SSR) is a technique used to approximate the color of a pixel on a reflective surface taking into account the surrounding scene. SSR works via a process called ray marching, which requires mapping quantities back and forth between screen and view space.
Implement screen-space reflections to simulate reflective surfaces like water or mirrors.
If you also implement screen-space ambient occlusion, SSAO and SSR together cannot be worth more than 30 points.
Deferred shading of spheres with hundreds of light sources (Hannes Nevalainen)
Forward shading is inefficient for scenes with more than a handful of light sources. Deferred shading allows rendering hundreds (or even thousands) of lights in real time with an acceptable framerate. The G-buffer storing the scene’s geometry information is the key component of deferred shading.
Implement a G-buffer and deferred shading to improve rendering performance and enable more complex lighting effects.
See LearnOpenGL/Deferred shading for more details.
Mesh and Scene Design

Blender’s mesh primitives (Blender)
Design your own meshes and assemble an original scene in Blender.
Here are some potentially valid implementations of this feature:
Each of the above examples can be worth 5 points if executed in an excellent way and if thorough documentation is provided in the final report. You can also decide to mix different aspects, for example design several relatively simple objects yourself, assemble them into an initial scene, and then obtain a final scene by simulating the interaction of the objects with each other.
In this feature there is room for creativity and exploration. If you have a specific idea in mind different from the options listed above, feel free to describe it in the proposal.
Noise Functions for 2D Terrain Generation (Surfaces)

Example of mountains with sharp ridges generated with Perlin noise (Red Blob Games)
Experiment with variations of the noise-based terrain generation implemented in the last homework.
You can, for example:
For more examples, see e.g. Red Blob Games.
To get 5 points, you should implement and discuss multiple variations of the Perlin noise terrain generation (e.g. all four examples above, or others of your choice).
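One variation, sketched below, turns smooth noise into mountains with sharp ridges (as in the Red Blob Games article). `noise2` is assumed to be your existing Perlin noise function returning values in [-1, 1]; all other names are illustrative:

```javascript
// Folding the noise at zero creates V-shaped creases; inverting them
// turns the creases into sharp mountain ridges.
function ridged(noise2, x, y) {
  return 1 - Math.abs(noise2(x, y));
}

// Octave summation (fractal Brownian motion) layered on top adds detail.
function fbmRidged(noise2, x, y, octaves = 4) {
  let sum = 0, amp = 1, freq = 1, norm = 0;
  for (let i = 0; i < octaves; i++) {
    sum += amp * ridged(noise2, x * freq, y * freq);
    norm += amp;
    amp *= 0.5;  // each octave contributes half as much...
    freq *= 2;   // ...at twice the frequency
  }
  return sum / norm; // normalized back into [0, 1]
}
```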
Day/Night Cycle
Day/night cycle (ppictures)
Implement a day/night cycle to simulate the passage of time in your scene. For an example implementation using texture blending see the answer in this post on WebGL Fundamentals.
If you need more control over the lighting, you can optionally implement a more advanced version of the day/night cycle that uses LUTs (look-up tables) to control the color of the light, as explained in this tutorial by Shahriar Shahrabi. Note that the linked implementation natively uses Blender: you will need to adapt it to your WebGL framework. This extra option is not worth extra points.
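The blending idea from the WebGL Fundamentals post can be sketched as follows; the colors, names, and the cosine-based blend factor are illustrative choices, not the linked implementation:

```javascript
// Linear interpolation between two RGB colors.
function mix(a, b, t) {
  return a.map((v, i) => v * (1 - t) + b[i] * t);
}

// Blend day and night appearances with a factor derived from the time
// of day: 1 at midnight, 0 at noon, smooth in between.
function skyColor(timeOfDay) { // timeOfDay in [0, 24)
  const day = [0.53, 0.81, 0.92];   // light blue
  const night = [0.02, 0.02, 0.10]; // dark blue
  const t = 0.5 * (1 + Math.cos((timeOfDay / 24) * 2 * Math.PI));
  return mix(day, night, t);
}
```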
Noise Functions for 3D Terrain Generation (Volumes)
Example 3D voxelized terrain generated with noise functions (F. Güzelant, E. Ipçi, F. Mikovíny, CS-341 2023)
Use Perlin noise (or another noise function, see e.g. Seph Gentle’s implementations) to procedurally generate a 3D terrain and render it in your scene.
Start from a 3D grid and generate a noise-based scalar field. Then quantize the scalar field to create a voxelized representation of the terrain.
You can optionally use the marching cubes algorithm to create a non-voxelized mesh from the scalar field. For this additional step you should look for a suitable external library implementing the marching cubes algorithm, and integrate it into your framework.
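The grid sampling and quantization step described above can be sketched as follows. `noise3` is assumed to be your 3D noise function returning values in [-1, 1]; the flat array layout and threshold are illustrative choices:

```javascript
// Sample a scalar field on a size^3 grid and quantize it: a voxel is
// solid where the field exceeds the threshold.
function voxelize(noise3, size, threshold = 0) {
  const solid = new Uint8Array(size * size * size);
  for (let z = 0; z < size; z++)
    for (let y = 0; y < size; y++)
      for (let x = 0; x < size; x++) {
        const value = noise3(x / size, y / size, z / size);
        solid[(z * size + y) * size + x] = value > threshold ? 1 : 0;
      }
  return solid; // 1 = terrain, 0 = air; render solid cells as cubes
}
```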
Procedural Texture Generation
Procedural texture created via cellular (Worley) noise (Kyle273 on ShaderToy)
Implement an algorithm for procedural texture generation different from Perlin noise.
You can, for example, implement Worley noise to create a cellular structure, or Gabor noise to create a texture with a more structured appearance.
Apply the procedurally generated texture to a surface in your scene.
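A minimal 2D Worley (cellular) noise can be sketched as below: the value at a point is the distance to the nearest of a set of pseudo-random feature points, one per grid cell. The integer hash is an illustrative choice; in practice the same logic would run in a fragment shader:

```javascript
// Deterministic pseudo-random number in [0, 1) for a grid cell.
function hash2(ix, iy, k) {
  let h = (ix * 374761393 + iy * 668265263 + k * 144665) | 0;
  h = Math.imul(h ^ (h >>> 13), 1274126177);
  return ((h ^ (h >>> 16)) >>> 0) / 4294967296;
}

function worley(x, y) {
  const cx = Math.floor(x), cy = Math.floor(y);
  let minDist = Infinity;
  // Only the 3x3 neighborhood of cells can contain the closest point.
  for (let j = -1; j <= 1; j++)
    for (let i = -1; i <= 1; i++) {
      const fx = cx + i + hash2(cx + i, cy + j, 0);
      const fy = cy + j + hash2(cx + i, cy + j, 1);
      minDist = Math.min(minDist, Math.hypot(x - fx, y - fy));
    }
  return minDist; // small near feature points, larger between them
}
```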
L-Systems for Procedural Scene Generation
A building grown using a L-system (Michael Hansmeyer)
An L-system is a mathematical model for generating fractal-like structures and simulating growth patterns through a set of rules and symbols that define how a shape evolves from simple initial conditions.
Implement an L-system-based algorithm and use it for generating trees, architecture, or other kinds of structures in your scene. For an interactive L-system web demo, see Andrei Kashcha’s implementation.
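The rewriting core of an L-system is small, as the sketch below shows. The rules encode Lindenmayer's classic algae example; for a tree you would use turtle-graphics symbols (F, +, -, [, ]) and interpret the final string geometrically:

```javascript
// Apply the production rules to every symbol, `iterations` times.
// Symbols without a rule are copied unchanged.
function rewrite(axiom, rules, iterations) {
  let s = axiom;
  for (let i = 0; i < iterations; i++) {
    s = [...s].map((c) => rules[c] ?? c).join("");
  }
  return s;
}

// Lindenmayer's original system: A -> AB, B -> A.
// rewrite("A", { A: "AB", B: "A" }, 3) -> "ABAAB"
```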
Bézier Curves
Bézier curves used for animating a camera path (Hamneggs on Shadertoy)
Bézier curves are used to model smooth curves in computer graphics. They can be applied to different tasks, from object modeling (see this tutorial on WebGL Fundamentals) to camera path animation (see Hamneggs’s demo on ShaderToy).
Implement Bézier curves. To evaluate the curve you can use the de Casteljau algorithm discussed in the lectures.
Irrespective of the application you choose, we suggest you validate the correctness of your implementation by showing a series of images or an animation in which the curve is dynamically traced, and the control points are highlighted. For an example, see maras’ demo on Shadertoy.
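The de Casteljau evaluation mentioned above can be sketched as follows: repeatedly interpolate between consecutive control points until a single point remains (names are illustrative):

```javascript
// Evaluate a Bézier curve of arbitrary degree at parameter t in [0, 1].
// `points` is an array of control points, each an array of coordinates.
function deCasteljau(points, t) {
  let pts = points.map((p) => [...p]);
  while (pts.length > 1) {
    const next = [];
    for (let i = 0; i < pts.length - 1; i++) {
      // Linear interpolation between consecutive points.
      next.push(pts[i].map((v, k) => (1 - t) * v + t * pts[i + 1][k]));
    }
    pts = next;
  }
  return pts[0]; // the point on the curve at parameter t
}
```

The intermediate point sets produced by each pass are exactly the construction lines you would highlight when animating the curve being traced.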
Wave Function Collapse

A randomly generated terrain autocompleted by wave function collapse (Maxim Gumin)
Wave Function Collapse is an algorithm to generate complex patterns from a simple input. The algorithm randomly selects compatible elements based on local constraints to create a coherent output, such as a tiled pattern or a level layout, that adheres to a predefined set of rules.
Implement the wave function collapse algorithm and use it to generate buildings, maps, or other 2D or 3D tiled structures.
To get started, you can refer to BorisTheBrave’s tutorial. For an example implementation, you can have a look at Maxim Gumin’s 2D implementation and the corresponding 3D generalization. Lingdong Huang implemented an infinitely expanding live demo of the algorithm in two and three dimensions.
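The observe/propagate loop at the heart of the algorithm can be illustrated on a deliberately tiny 1D version (tiles, rules, and the deterministic `pick` are illustrative; a real implementation picks randomly, weighted by tile frequency, and works on 2D or 3D grids):

```javascript
// Each cell starts with all tiles possible; repeatedly fix the cell with
// the fewest remaining options and propagate adjacency constraints.
// allowed[a] = list of tiles that may appear immediately after tile a.
function collapse1D(length, tiles, allowed, pick = (opts) => opts[0]) {
  const cells = Array.from({ length }, () => [...tiles]);
  for (let step = 0; step < length; step++) {
    // Observe: choose the undecided cell with minimal "entropy".
    let best = -1;
    for (let i = 0; i < length; i++)
      if (cells[i].length > 1 && (best < 0 || cells[i].length < cells[best].length))
        best = i;
    if (best < 0) break; // everything is decided
    cells[best] = [pick(cells[best])];
    // Propagate: narrow neighbors until nothing changes.
    let changed = true;
    while (changed) {
      changed = false;
      for (let i = 0; i + 1 < length; i++) {
        const ok = cells[i + 1].filter((t) => cells[i].some((a) => allowed[a].includes(t)));
        if (ok.length !== cells[i + 1].length) { cells[i + 1] = ok; changed = true; }
        const okL = cells[i].filter((a) => cells[i + 1].some((t) => allowed[a].includes(t)));
        if (okL.length !== cells[i].length) { cells[i] = okL; changed = true; }
      }
    }
  }
  return cells.map((c) => c[0]); // one tile per cell
}
```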
Particle Effects
The polygon shredder particle effect takes a set of cubes and turns them into confetti (Jaume Sanchez)
Particle effects are often used to simulate phenomena such as fire, smoke, sparks, and magic effects.
We suggest you start by implementing the billboard technique to render particles in 3D space. You should then implement particle instancing and allow the user to control particles’ parameters such as lifetime, size, and color.
See OpenGL-Tutorial/Particles and LearnOpenGL/Particles for more details.
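The CPU-side bookkeeping behind such a system can be sketched as follows. In a real implementation the per-particle data would be uploaded to the GPU and each particle drawn as a camera-facing billboard quad; all names and the gravity term here are illustrative:

```javascript
// Each particle is a plain { pos, vel, life } object; expired particles
// are respawned via the user-supplied `spawn` function.
function updateParticles(particles, dt, spawn) {
  for (const p of particles) {
    p.life -= dt;
    if (p.life <= 0) {
      Object.assign(p, spawn()); // respawn with fresh attributes
      continue;
    }
    for (let k = 0; k < 3; k++) p.pos[k] += p.vel[k] * dt;
    p.vel[1] -= 9.81 * dt; // simple gravity, illustrative
  }
}
```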
Boids
Boids algorithm simulating flocking behavior (Ricky Reusser)
Boids is an algorithm used to simulate flocking behavior (see the dedicated Wikipedia page). The algorithm is based on three simple rules: separation, alignment, and cohesion. More complex behaviors can be achieved by adding additional rules or modifying the existing ones.
Implement the boids algorithm and use it to simulate the flocking behavior of birds, fish, insects, abstract particles, or any other entity in your scene.
As an initial example we recommend looking at Ricky Reusser’s implementation.
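One update step combining the three rules can be sketched as below (2D for brevity; the weights and the plain `{pos, vel}` representation are illustrative tuning choices, not a reference implementation):

```javascript
// Compute the steering acceleration for one boid from its neighbors,
// combining separation, alignment, and cohesion.
function steer(boid, neighbors, w = { sep: 1.5, ali: 1.0, coh: 1.0 }) {
  const acc = [0, 0];
  if (neighbors.length === 0) return acc;
  const center = [0, 0], avgVel = [0, 0];
  for (const n of neighbors) {
    for (let k = 0; k < 2; k++) {
      center[k] += n.pos[k] / neighbors.length;  // for cohesion
      avgVel[k] += n.vel[k] / neighbors.length;  // for alignment
      acc[k] += w.sep * (boid.pos[k] - n.pos[k]); // separation: push apart
    }
  }
  for (let k = 0; k < 2; k++) {
    acc[k] += w.coh * (center[k] - boid.pos[k]); // cohesion: pull to center
    acc[k] += w.ali * (avgVel[k] - boid.vel[k]); // alignment: match velocity
  }
  return acc; // add to boid.vel (scaled by dt), then clamp the speed
}
```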
Wave Simulation
Wave simulation with ripple effect and interference (T. Norlha-Tsang, M. Kalajdzic, L. Desmeules, CS-341 2023)
Implement a spring-based 3D wave simulation. Compute the height map of the water surface according to the spring equation. See this discussion of a 2D version of spring-based waves for more details.
This feature can be complemented by particle effects for creating realistic water splashes and screen-space reflections for rendering the water surface.
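One possible update step for a spring-based height field, following the 2D discussion linked above extended to a grid (the stiffness, damping, and clamped-edge boundary are illustrative choices):

```javascript
// Each column's height is pulled toward its 4-neighbors' average
// (spring force), with damping; edges are clamped.
function stepWave(height, velocity, size, k = 0.3, damping = 0.99) {
  for (let y = 0; y < size; y++)
    for (let x = 0; x < size; x++) {
      const i = y * size + x;
      const left = height[y * size + Math.max(x - 1, 0)];
      const right = height[y * size + Math.min(x + 1, size - 1)];
      const up = height[Math.max(y - 1, 0) * size + x];
      const down = height[Math.min(y + 1, size - 1) * size + x];
      // Spring equation: acceleration toward the neighbors' average.
      const accel = k * ((left + right + up + down) / 4 - height[i]);
      velocity[i] = (velocity[i] + accel) * damping;
    }
  // Integrate heights only after all velocities are updated.
  for (let i = 0; i < size * size; i++) height[i] += velocity[i];
}
```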
These are the final videos of the three projects that won the Best Project Award in 2024. We hope they will inspire you to create your own amazing project!
Where the Wilderness Lives

The videos are not publicly available and are intended for educational purposes only. Please keep them confidential and do not share them outside the course.