
Dungeon Divers is a modern homage to the classic arcade game Venture (Coleco), developed in a custom graphics engine. Players take on the role of a bee, collecting treasures and navigating mazes filled with enemies. To beat the game, all treasure must be collected.

Platform: PC

Playtime: 10 mins

Engine: Custom Graphics Engine

Graphics API: OpenGL

Roles: Graphics Programmer, User Interface Programmer 

Development Duration: March 2024 - May 2024

Team Size: 4

Download: Dungeon Divers

MY ROLE
 

  • Developed the graphics engine, including the rendering pipeline, shaders, and asset management / importing.

  • Constructed a modular UI system that allows for Menus, Text, and Buttons.

  • Created save states in .ini files to support high-score saving and loading of default values.

  • Devised a scoring structure within the UI system that informs the player of their current score, current enemies, and remaining treasures.

  • Collaborated with team members on integrating enemy AI, gameplay, and graphical / UI systems.

IMPLEMENTING THE GRAPHICS ENGINE
 

Developing a graphics engine from scratch takes time and patience. To start, I created six world matrices (one for each side of a cube), then used the translation and rotation methods provided by Gateware to orient each side where I wanted it to appear in the renderer. I placed twenty-five vertices on each side of the cube and used their locations to draw lines to the vertices on the other side. Since I treated each side of the cube as an individual object, I was able to loop over the sides and draw their lines independently as part of the rendering loop. After the vertices run through the vertex shader, which transforms them to screen space, the result is a cube that appears three-dimensional. In addition to this, I used a camera matrix and input controls to move a camera within the 3D space and view the cube from different perspectives.
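
The engine itself uses Gateware's matrix helpers, so the sketch below is only an illustration of the idea with GLM standing in: one shared five-by-five grid of line vertices, plus six world matrices that rotate that grid onto each face of the cube.

```cpp
// Illustrative sketch only -- the real engine uses Gateware's math library; GLM stands in here.
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Line list for one face: a 5x5 grid of vertices in the XY plane, pushed out to z = +0.5,
// with each vertex connected by a line to the vertex on the opposite edge of the face.
std::vector<glm::vec3> BuildFaceGridLines()
{
    std::vector<glm::vec3> lines; // consecutive pairs form GL_LINES segments
    for (int i = 0; i < 5; ++i)
    {
        float t = -0.5f + i * 0.25f;          // evenly spaced across the face
        lines.push_back({ t, -0.5f, 0.5f });  // bottom edge ...
        lines.push_back({ t,  0.5f, 0.5f });  // ... to the opposite (top) edge
        lines.push_back({ -0.5f, t, 0.5f });  // left edge ...
        lines.push_back({  0.5f, t, 0.5f });  // ... to the opposite (right) edge
    }
    return lines;
}

// One world matrix per cube face: rotate the +Z face into each of the six orientations.
std::vector<glm::mat4> BuildFaceWorldMatrices()
{
    const glm::mat4 I(1.0f);
    return {
        I,                                                              // front  (+Z)
        glm::rotate(I, glm::radians(180.0f), glm::vec3(0, 1, 0)),       // back   (-Z)
        glm::rotate(I, glm::radians( 90.0f), glm::vec3(0, 1, 0)),       // right  (+X)
        glm::rotate(I, glm::radians(-90.0f), glm::vec3(0, 1, 0)),       // left   (-X)
        glm::rotate(I, glm::radians(-90.0f), glm::vec3(1, 0, 0)),       // top    (+Y)
        glm::rotate(I, glm::radians( 90.0f), glm::vec3(1, 0, 0)),       // bottom (-Y)
    };
}
```

In the render loop, each face's lines are drawn with GL_LINES after its world matrix, the camera's view matrix, and the projection matrix are applied in the vertex shader.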


Now that I had a functioning graphics pipeline, the next step was to see if I could get an asset into the renderer and get lighting to work. I used a logo provided by Full Sail University, with the vertex data already present, to develop this portion. I removed the cube's world matrices and replaced them with two: one for the text and one for the logo. The rendering technique is similar, but instead of lines, indices determine where the triangles need to render.
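
The exact buffer setup in the engine isn't shown here; as a minimal sketch (assuming GLAD as the function loader and a VAO already bound with the vertex data), an element buffer holds the indices and glDrawElements assembles the triangles from them:

```cpp
// Sketch: switching from line lists to indexed triangles with an element buffer.
// Assumes an OpenGL 3.3+ context, a bound VAO, and vertex data already uploaded to a VBO.
#include <glad/glad.h>
#include <vector>

GLuint UploadIndices(const std::vector<unsigned int>& indices)
{
    GLuint ebo = 0;
    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);   // recorded in the currently bound VAO
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 indices.size() * sizeof(unsigned int),
                 indices.data(), GL_STATIC_DRAW);
    return ebo;
}

void DrawMesh(GLsizei indexCount)
{
    // The indices decide which vertices form each triangle, instead of drawing raw lines.
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
}
```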

In addition, a buffer object now passes lighting data to the fragment shader to add ambient, Lambertian, and specular reflection lighting.
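
The field names and layout below are assumptions rather than the engine's actual structure; the sketch just shows one way a buffer object could carry the data the fragment shader needs for the three lighting terms:

```cpp
// Sketch: a uniform buffer carrying data for ambient + Lambertian + specular lighting.
// Field names and layout are illustrative, not the engine's actual structure.
#include <glad/glad.h>
#include <glm/glm.hpp>

struct alignas(16) LightBlock          // matches a std140 uniform block in the shaders
{
    glm::vec4 lightDirection;          // directional light, w unused
    glm::vec4 lightColor;              // rgb intensity
    glm::vec4 ambientTerm;             // constant ambient contribution
    glm::vec4 cameraPosition;          // needed for the view-dependent specular term
};

GLuint CreateLightBuffer(const LightBlock& data)
{
    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(LightBlock), &data, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, /*binding point*/ 0, ubo);
    return ubo;
}
```

In the fragment shader, the Lambertian term scales the light color by max(dot(normal, -lightDirection), 0), and the specular term compares the reflected light direction against the view direction derived from cameraPosition.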

IMPLEMENTING A TEST LEVEL
 

The next step was to develop a test level that could take assets exported from Blender and render them on screen. To accomplish this, I used a Python script that reads specific file types and generates the vertex data, which can then be transformed into usable object data in the renderer. I went with an object-oriented approach, which required creating a model class where all the rendering takes place per object in the world.

This is the order in which the updated rendering engine works (a simplified sketch follows the list):

  • Take all model data from a text file and upload it into a data structure in the renderer.

  • For each model in the level we now have an object we can use, with its own rendering method.

  • In the Level class, where the data structure for all models lives, a loop calls each model's rendering method.

  • As each model is rendered, it is pushed through the rendering pipeline, runs through the shaders, and is output to the screen.
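
The class and method names below mirror the description above, but the bodies are a heavily simplified sketch rather than the engine's actual code; the parsing and GL details are omitted.

```cpp
// Sketch: per-object rendering driven by a Level that owns all Models.
// Parsing details and GL handles are simplified placeholders.
#include <string>
#include <vector>

class Model
{
public:
    // Vertex/index data parsed from the exported text file would be uploaded here.
    void LoadFromExportedData(/* parsed vertices, indices, material */) {}

    // Each model binds its own buffers and issues its own draw call.
    void Render(/* view & projection matrices */)
    {
        // bind VAO, upload this model's world matrix, glDrawElements(...)
    }
};

class Level
{
public:
    // 1. Read every model's data out of the level text file into the container below.
    void Load(const std::string& levelFile)
    {
        (void)levelFile; // parsing omitted in this sketch
        // for each entry: models.emplace_back(); models.back().LoadFromExportedData(...);
    }

    // 2-4. Loop over the container and let every model push itself through the pipeline.
    void Render(/* view & projection matrices */)
    {
        for (Model& model : models)
            model.Render();
    }

private:
    std::vector<Model> models;   // one object per model in the level
};
```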


OTHER FEATURES
 

Some other notable features that were added:
 

  • Support for up to sixteen additional lights through an additional buffer passed to the fragment shader. To accomplish this, the asset export script was updated to export spot and point lights, and the model loader in the graphics engine stores this extra data and notes the object type (a sketch of one possible light buffer follows this list).

  • A skybox was added to showcase the depth of the engine; a single image was used to prototype the feature. The skybox is treated as another object, and when it runs through the shaders the texture coordinates of the image are mapped onto the sides of a cube.
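
The engine's real buffer layout isn't reproduced here; as a hedged illustration, a fixed-capacity array like the one below could hold the exported spot and point lights and be handed to the fragment shader through a second uniform buffer:

```cpp
// Sketch: a fixed-capacity array of extra lights passed to the fragment shader.
// The 16-light cap and light types mirror the description above; the layout is assumed.
#include <glad/glad.h>
#include <glm/glm.hpp>

enum class LightType : int { Point = 0, Spot = 1 };

struct alignas(16) ExtraLight                // std140-friendly: 16-byte aligned members
{
    glm::vec4 positionAndType;               // xyz = position, w = LightType
    glm::vec4 directionAndCone;              // xyz = spot direction, w = cone angle (cosine)
    glm::vec4 colorAndRadius;                // rgb = color, w = attenuation radius
};

struct alignas(16) ExtraLightBlock
{
    ExtraLight lights[16];                   // up to sixteen additional lights
    int        lightCount;                   // how many entries are valid this frame
    int        padding[3];                   // keep the block a multiple of 16 bytes
};

GLuint CreateExtraLightBuffer(const ExtraLightBlock& block)
{
    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(ExtraLightBlock), &block, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, /*binding point*/ 1, ubo);   // slot 1: extra lights
    return ubo;
}
```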

DEVELOPING THE USER INTERFACE
 

Since this is a custom graphics engine, the user interface had to be built on top of the existing rendering engine with new functionality added. In the early stages, I created a User Interface class that would eventually host several classes: Text, Buttons, and Menus. At first, I placed simple shapes and learned how to position them in 2D screen space while keeping the rest of the objects in 3D space.
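
The engine's own solution isn't shown here; a common way to get that effect, sketched below under the assumption of a separate UI shader program, is to draw the UI with an orthographic projection and depth testing disabled so it sits on top of the 3D scene:

```cpp
// Sketch: placing UI elements in 2D screen space after the 3D scene is drawn.
// Assumes a dedicated UI shader with a "projection" uniform; names here are placeholders.
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

void BeginUIPass(GLuint uiShaderProgram, int screenWidth, int screenHeight)
{
    // Pixel-space orthographic projection: (0,0) top-left, (width,height) bottom-right.
    glm::mat4 uiProjection = glm::ortho(0.0f, (float)screenWidth,
                                        (float)screenHeight, 0.0f);

    glUseProgram(uiShaderProgram);
    glUniformMatrix4fv(glGetUniformLocation(uiShaderProgram, "projection"),
                       1, GL_FALSE, glm::value_ptr(uiProjection));

    // The UI is drawn last and always on top, so the 3D depth buffer is ignored.
    glDisable(GL_DEPTH_TEST);
}

void EndUIPass()
{
    glEnable(GL_DEPTH_TEST);   // restore state for the next frame's 3D pass
}
```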


After the groundwork for placing objects in screen space was working, I began developing three basic classes: Menus, Text, and Buttons. Menus operate as a canvas to place text and buttons on; Text acts as UI elements such as hearts, score, and any other text on screen; and Buttons have on-click functionality, so you can open menus, check your controls, and so on.
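
The real classes also contain all of the rendering and layout code; the sketch below only shows one plausible shape for the relationships described, with the member names and callback style being assumptions:

```cpp
// Sketch: the Menu / Text / Button relationship described above, heavily simplified.
// Member names and the callback style are assumptions, not the engine's actual API.
#include <functional>
#include <string>
#include <vector>

struct TextElement
{
    std::string value;            // hearts, score digits, labels, etc.
    float x = 0.0f, y = 0.0f;     // screen-space position
};

struct Button
{
    std::string label;
    float x = 0.0f, y = 0.0f, width = 0.0f, height = 0.0f;

    std::function<void()> onClick;      // e.g., open the controls menu

    bool Contains(float px, float py) const
    {
        return px >= x && px <= x + width && py >= y && py <= y + height;
    }
};

class Menu                               // the canvas that owns text and buttons
{
public:
    std::vector<TextElement> texts;
    std::vector<Button>      buttons;

    // Forward a mouse click to whichever button was hit.
    void HandleClick(float px, float py)
    {
        for (Button& b : buttons)
            if (b.Contains(px, py) && b.onClick)
                b.onClick();
    }
};
```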


UI elements were structured to share locations based on their functionality. For example, the High Score, Score, and Level text use the same location for every digit, since we only render the digits needed to show the current score or level to the player.
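
As a small illustration of that layout rule (the drawing helper is an assumed stand-in, not the engine's), each digit slot has a fixed screen position and only as many digits as the current value needs are drawn:

```cpp
// Sketch: drawing a numeric value using fixed per-digit positions.
// drawGlyph stands in for the engine's actual text-rendering call.
#include <functional>
#include <string>

void DrawNumber(int value, float baseX, float baseY, float digitWidth,
                const std::function<void(char, float, float)>& drawGlyph)
{
    std::string digits = std::to_string(value);
    // Every digit slot has the same pre-defined location each frame,
    // and only as many slots as the value needs are actually rendered.
    for (size_t i = 0; i < digits.size(); ++i)
        drawGlyph(digits[i], baseX + i * digitWidth, baseY);
}
```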
