Imagine being able to build a huge world inhabited by hundreds, if not thousands, of players exploring worlds and monsters of your own creation, collecting rewards from quests that you’ve designed. Picture the amount of work that needs to be done: all the departments, all the artists, the programmers, the writers, the producers. Now imagine that instead of a team of 50 or more people, you have to do all of that work by yourself. The task is extremely daunting, but that is what I am setting out to do with my game Dungeon Delvers, and it is becoming a little less daunting with help from all of the new AI tools now available at our fingertips.
For a while now I’ve wanted to take a stab at game development on my own. The one project I’ve always wanted to undertake was the large task of building an MMORPG. The sheer size and scope of doing a full 3D world by myself seemed impossible until recently. I have been leveraging generative AI in almost all of my workflows, and it has sped up my work in some areas while introducing complications in others. I’m writing this post to give some insight into how I leverage generative AI to help create in-game assets, how it assists me in my programming, and how it even helps with things such as the creation of my animations.
Programming:
Visual Studio has an amazing integration called GitHub Copilot. It provides autocomplete suggestions from a model trained on code from repositories all over GitHub. I’m primarily a front-end developer in my 9-to-5 job, so leveraging the suggestions Copilot offers on my back-end tasks has been invaluable in helping me feel more comfortable with APIs I am not as familiar with. Normally I use the tab autocomplete suggestions that GitHub Copilot offers, but more recently I started leveraging GitHub Copilot Chat to help me build out unit tests for my code.

Unit tests have always been one of those things that I’ve had trouble writing. I always seem to forget some area of the code that needs coverage or some if statement that hasn’t been properly tested. I can say that GitHub Copilot does an amazing job of scanning files that I’ve already written and then creating unit tests based on that code to help ensure coverage. One of the first pitfalls that I found, however, was that most of the generated tests never passed initially. Usually this had something to do with the setup of my Jest mocks: Copilot would generate a test, but I would have to go back and fix the initial setup before the tests worked the way they were expected to.

This proved to be both a blessing and a curse. The blessing was that I was getting a lot more code coverage right out of the gate than I would have on my own, which is a huge win in my books. The drawback was that I spent a lot of time trying to figure out why my tests weren’t working the way they should, even though the code looked as if it should work. This echoes a warning I have read online: don’t trust exactly what AI spits out, and a good understanding of programming is still extremely important. The good news is that the tests the AI generated for the authentication server (a fairly standard Express server) didn’t really require any updates to the actual logic being tested, which was great. The jury is still out on writing unit tests for the more bespoke areas of the game: the game client and the actual game server.
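To give a concrete sense of what those fixes looked like, here’s a simplified sketch of a Copilot-style Jest test for a login route, with the mock setup I typically had to add or correct by hand. The module and route names here are placeholders, not the actual Dungeon Delvers code:

```typescript
// Sketch only: "./app" and "./userService" are stand-ins for my real modules.
import request from "supertest";
import { app } from "./app";

// This mock factory is the part the generated tests usually left out or got wrong;
// without it the tests failed before asserting anything.
jest.mock("./userService", () => ({
  findUserByEmail: jest.fn(),
}));

import { findUserByEmail } from "./userService";

describe("POST /login", () => {
  it("returns 401 when the user does not exist", async () => {
    // Tell the mocked service to report that no user was found.
    (findUserByEmail as jest.Mock).mockResolvedValue(null);

    const res = await request(app)
      .post("/login")
      .send({ email: "nobody@example.com", password: "wrong-password" });

    expect(res.status).toBe(401);
  });
});
```

Once the mock setup was in place, the assertions themselves usually held up as-is; it was almost always the scaffolding around them that needed my attention.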
Assets:
A more controversial section than the last is my workflow for using generative AI to create concept art, textures, and models for me to work from. I’m going to go on record and say that, as much as I wish I were, I am not the greatest artist. As a solo developer the deck is already stacked against me: I don’t have the time, resources, or skill set to create the models and assets that AAA game studios are able to create. The workflow I use to create base meshes is to go to Gemini and ask it to generate a picture of a high-quality 3D model (let’s say a human male), showing the front and back profile of the model wearing nothing but briefs. Sometimes it takes a bit more prompting to get it closer to the look and feel I want. Once I find an image that I like, I take it to Trellis, where I can upload the image and it will generate a 3D mesh based on it.

Sometimes the mesh doesn’t exactly match what I was hoping for, but running another generation will often produce better results that I can use. This entire workflow has absolutely changed how I work. Before, I was spending hours and hours watching YouTube tutorials on sculpting and anatomy, or just general Blender tutorials; now I can get a working base mesh that has all of that out of the gate, along with a texture generated from the image I provided. I can get an entire working idea out the door in minutes where before it would probably have taken me days or weeks. One major hurdle is the model’s topology—the arrangement of its vertices, edges, and faces. AI-generated meshes often have uneven or inefficient topology, which can lead to visual distortions and performance issues in a game engine. For instance, the generated model of the human male had numerous triangles and stretched polygons, which would cause shading artifacts and make animation difficult.
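When I want a quick read on how rough the topology is before opening Blender, a tiny script that counts the face types in the exported OBJ gets me most of the way there. This is just a rough sketch, and the file path is only an example:

```typescript
// Rough sketch: count triangles, quads, and n-gons in an exported OBJ
// to gauge how much retopology a generated mesh will need.
import { readFileSync } from "fs";

function faceCounts(objPath: string) {
  const counts = { tris: 0, quads: 0, ngons: 0 };
  for (const line of readFileSync(objPath, "utf8").split("\n")) {
    if (!line.startsWith("f ")) continue; // face records look like "f v1 v2 v3 ..."
    const vertCount = line.trim().split(/\s+/).length - 1;
    if (vertCount === 3) counts.tris++;
    else if (vertCount === 4) counts.quads++;
    else counts.ngons++;
  }
  return counts;
}

// Example path; point this at whatever the image-to-3D tool exported.
console.log(faceCounts("./exports/human_male_base.obj"));
// An animation-friendly base mesh is mostly quads; a mesh dominated by
// triangles, like the generated human male, is a sign it needs retopology.
```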