Unity Learning Pt. I: Layouts for days

It’s no secret that I’m obsessed with video game UIs. I’m currently working on the menus and UI for more than a couple of indie games with some very talented people in my spare time. Every time I propose a cool UI for a game, the developers are quick to remark on the complexity involved in building it out and how difficult it is to actually do in Unity. I could never tell if it was the limitations of the toolkit, the developers’ inexperience with building UI layouts, or if it was just too time-intensive a task for a programmer to undertake. This has resulted in some less-than-ideal interfaces shipping in those games, and I knew there had to be a better way.

So in order to get a clearer understanding of the situation, I started looking into how difficult this could actually be for a designer. I’ve done complicated web design layouts in the past; this can’t be any worse, right? It looked like Unity’s layouts could all be created through a GUI where you just enter the right X/Y/width/height values, add some more properties, and watch them update in the game scene. This was comforting, so I decided to jump right in and get my hands dirty building layouts myself in Unity.

Keep in mind, my goal here is not to become an all-out Unity superstar who can make a game from scratch (although that’d be great). I simply want to understand how UIs are laid out in Unity and familiarize myself with what’s easy to do and what’s not. Once I’ve done that, I want to get a sense of how to build more complex interfaces and smoothly integrate them into the gameplay. That’s all. I want to get good enough to build the UI layouts I’ve been proposing on my own and hand them off to the developers to hook up to the right system events and button inputs. I bought this book, downloaded the latest version of Unity, and got to work.

Right off the bat, I knew it was going to be painful. Unity’s own UI is a mess. I’d say it’s even more intimidating than After Effects for beginners (yeah, you read that right). Regardless, I stuck to the instructions in the book and started following along as I was slowly introduced to how cameras and canvases control everything in Unity. Every UI element has to live on a Canvas, and the Canvas can be set to different render modes: screen space overlay, screen space through a specific camera, or world space. You can use Panels to start laying out the different components of a HUD, and you can anchor these Panels to different parts of the screen. This alone already gave me a very strong sense of how HUD elements are usually laid out: pinned to a specific corner of the screen with some manual positioning offsets.
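To make that concrete, here’s a minimal sketch of that setup in script form. The component and field names (`CornerPin`, `hudPanel`) are my own placeholders, and in practice you’d set most of this in the Inspector rather than in code:

```csharp
using UnityEngine;

// Minimal sketch: pin a HUD panel to the top-left corner of its canvas.
// "hudPanel" is a hypothetical RectTransform assigned in the Inspector.
public class CornerPin : MonoBehaviour
{
    public RectTransform hudPanel;

    void Start()
    {
        // The canvas can render as a screen overlay, through a specific
        // camera, or in world space.
        GetComponent<Canvas>().renderMode = RenderMode.ScreenSpaceOverlay;

        // Anchor and pivot at the top-left corner (0,1 in normalized space)...
        hudPanel.anchorMin = new Vector2(0f, 1f);
        hudPanel.anchorMax = new Vector2(0f, 1f);
        hudPanel.pivot = new Vector2(0f, 1f);

        // ...then nudge it inward with a manual positioning offset.
        hudPanel.anchoredPosition = new Vector2(20f, -20f);
    }
}
```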

I learned in depth about resolution and aspect ratio handling, and how all the UI properties have to be set so that everything scales properly, and predictably, when the screen size or resolution changes. After some more explanation of how Canvas Groups work and how their settings cascade from parent to child elements, I actually started building a basic health bar HUD with a character asset from some pre-made templates. A lot of properties had to be set correctly, and I was glad to get it working. I could already see how it would come together when hooked up to the code that controls how much damage the player takes and how the health bar reacts in response.
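I haven’t written that hookup yet, but here’s roughly what I imagine it looking like. The class and method names are hypothetical; the one real piece of Unity machinery is the Image component’s Filled type, whose `fillAmount` (0 to 1) drives how much of the bar is drawn:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical damage -> health bar hookup. The Image is set to the
// Filled type in the Inspector, so fillAmount (0..1) controls the bar.
public class HealthBar : MonoBehaviour
{
    public Image fillImage;        // the bar's fill graphic
    public float maxHealth = 100f;
    float currentHealth;

    void Start()
    {
        currentHealth = maxHealth;
    }

    public void TakeDamage(float amount)
    {
        currentHealth = Mathf.Max(0f, currentHealth - amount);
        fillImage.fillAmount = currentHealth / maxHealth; // shrink the bar
    }
}
```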

Next, I started learning about layout groups: horizontal, vertical, and grid layouts. These are mostly used for list items, where you select from a list of options, or in an inventory system where you see a bunch of consumable items laid out. Unity provides a lot of tools to control the ordering, layout, and alignment of these grids. Nearly every game I’ve played has some sort of grid-based inventory management system, so it was cool to see how these actually work behind the scenes. Again, I was starting to see how to tweak these and re-order the objects in them based on game events, say when you pick up a new item and you want it to now be the first thing in the list with a “NEW!” label of some kind.
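Since a Grid Layout Group positions its children purely by sibling order, that re-ordering turns out to be nearly a one-liner. A rough sketch (the names and the “NEW!” badge object are my own invention):

```csharp
using UnityEngine;

// Sketch of the "new item jumps to the front" idea. Assumes this script
// sits on an object with a GridLayoutGroup, which lays children out in
// sibling order, so reordering is just moving the new item's transform.
public class InventoryGrid : MonoBehaviour
{
    public void AddItem(RectTransform item, GameObject newLabel)
    {
        item.SetParent(transform, worldPositionStays: false); // join the grid
        item.SetAsFirstSibling();                             // show it first
        newLabel.SetActive(true);                             // flag it as NEW!
    }
}
```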

So yeah, that’s where I’m at right now. I’ve set up a bunch of canvases for different gameplay moments. For instance, I have a “Popup Canvas” that only appears when a popup modal has to be shown on screen (e.g. the pause menu). The HUD elements are all on the HUD Canvas, and the inventory screens live on a separate Inventory Panel within the Popup Canvas. I’m getting a good understanding of how much prep work is actually involved in building these layouts, especially in games with a dynamic inventory system that keeps expanding as the game goes on. You need it to start off limited, then grow as the player hits certain progression unlocks. When it does expand, you need to specify how and in what direction it expands, what happens when content overflows, and how re-ordering or sorting works.
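For the popup toggling, here’s the shape of what I’ve got, with my own naming throughout. The canvas stays in the scene and just gets enabled when a modal needs to appear (freezing time while it’s up is a common pattern, not something the book mandates):

```csharp
using UnityEngine;

// Sketch of my "Popup Canvas" controller. The canvas object lives in the
// scene permanently and is enabled only while a modal is visible.
public class PopupCanvasController : MonoBehaviour
{
    public Canvas popupCanvas;    // the dedicated "Popup Canvas"
    public GameObject pausePanel; // the pause menu modal

    public void ShowPauseMenu()
    {
        popupCanvas.enabled = true;
        pausePanel.SetActive(true);
        Time.timeScale = 0f;      // freeze gameplay behind the modal
    }

    public void HidePauseMenu()
    {
        pausePanel.SetActive(false);
        popupCanvas.enabled = false;
        Time.timeScale = 1f;
    }
}
```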

As a designer, this is also giving me a very good sense of how to actually slice assets for implementation in Unity. In the past, I’ve always used my best judgement or just asked the developers how they wanted the assets cut up and layered, but now that I know how these layouts are built, I know exactly what part of the background to split from the overlaid icon. I know exactly how to scale and layer the pieces so that they behave as expected in the game. I know how to compile them all into a sprite sheet so that they use the least amount of memory while still being sliceable in Unity’s Sprite Editor.

And it’s about to get a lot more complicated. In the next chapter, I dive into some code to actually hook up UI events with gameplay inputs. I’m apparently going to be building a drag-and-drop interface for the inventory and making the UI respond to things that happen in the game. This is exciting, because this is where it all starts to come together. I want to see how much of my previous coding knowledge will help here and where I’ll need to re-learn things. Either way, it should be fun and I’m excited to keep learning.
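From what I can tell skimming ahead, UI drag-and-drop in Unity is built on the EventSystems interfaces. I haven’t actually done the chapter yet, so treat this as my guess at the shape of the code rather than the book’s approach:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// A rough preview of the drag-and-drop pattern I expect to build, using
// Unity's built-in EventSystems interfaces. Untested guesswork on my part.
public class DraggableItem : MonoBehaviour, IBeginDragHandler, IDragHandler, IEndDragHandler
{
    public Canvas canvas;         // parent canvas, for scale-correct movement
    CanvasGroup canvasGroup;
    RectTransform rectTransform;

    void Awake()
    {
        rectTransform = GetComponent<RectTransform>();
        canvasGroup = GetComponent<CanvasGroup>();
    }

    public void OnBeginDrag(PointerEventData eventData)
    {
        canvasGroup.blocksRaycasts = false; // let drop targets see the pointer
    }

    public void OnDrag(PointerEventData eventData)
    {
        // Convert screen-space pointer movement into canvas-space movement.
        rectTransform.anchoredPosition += eventData.delta / canvas.scaleFactor;
    }

    public void OnEndDrag(PointerEventData eventData)
    {
        canvasGroup.blocksRaycasts = true;
    }
}
```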

I have to say, on a more philosophical level, the act of learning a tool like this feels really good. It takes me back to when I used to spend days on end playing around in Photoshop, following tutorials and learning the inner workings of various effects and filters. I had the same experience when I learned After Effects for the first time (still do), and it’s all happening again with Unity. It’s cool to think back on a game that did something particularly well, see the technique explained in this book, and go “ah, so that’s how they did that!”. That moment of realization, where it starts to look like a series of linked actions and not some voodoo magic, is where you start to believe that you could do this too.

Heck, I’m already starting to look at game UIs/HUDs and see a series of grouped rectangles anchored or pinned to certain edges of the screen. I recently saw these excellent slides from Omer Younas where he details best practices for AAA UI, and a lot of his slides talk about the same thing. There are tons of design principles in there, but also technical considerations for implementing them in the engine. The good news is that the UI concepts available in Unity look transferable to the Frostbite engine, and I wouldn’t be surprised if that’s the case across all game engines.

Despite all my complaints about Unity, it overall seems like a pretty powerful tool capable of many things, if you’re willing to put up with a bit of a learning curve (as with all things) and have a book walk you through its inner workings. I’ll continue to work my way through the book and get as far as I can, adding more blog posts about my progress and learnings as I go. I’m only a third of the way in and I’m already glad I started doing this. I’ve learned way more than I expected, in a much shorter timeframe than I expected. Now on to actually playing with some code. See ya in Pt. II.