In Pursuit of our Tone: Part 1 – Building a vibrant Lab

Hi. Rafael here, finally with another devlog about the game, as opposed to grants and other business stuff that no one cares about.

Guillaume and I have finally hired our first teammates (we’re 6 people now!), figured out their contracts, payroll and benefits, and started transitioning back into game development. Still, between managing our little startup, going back and forth with the NSF, and actively contributing to Tablecraft’s development, we always find a way to prioritize other things over writing devlogs. Even my attempt at streamlining the process by replacing written devlogs with podcast episodes, where all we had to do was talk about stuff, ended up getting neglected. RIP. 💀

My brain’s Department of Prioritization and Department of Self Justification released a joint brain-wide statement a while ago, notifying all other brain departments that there would be no new Tablecraft devlog posts from me in the foreseeable future. The announcement read “there is simply no time to write devlogs, we have a business to run and a game to make”.

Things would’ve worked out just fine, had it not been for John Ruiz, who, perhaps unknowingly, completely overthrew the status quo when he recently started writing a devlog about the work that he’s doing.

“PWAAAAAAAAHHHHHHH”, was the sound of the sirens going off in my brain’s Department of Logic and Reason, after immediately realizing that for John’s devlog to make any sense, I would first have to contextualize things with a devlog of my own. Brain chaos ensued.

After a week or so of brain politics, a deal was finally struck: the Department of Logic and Reason is allowed to take control of my body for just long enough to pump out this devlog, and in return, it is forbidden from notifying my brain’s Department of Guilt and Responsibility the next time my brain’s Department of Self Justification decides it’s okay for me to go on a YouTube rabbit hole that lasts till 3 AM.

Now, with all the parties involved having finally consented to it, welcome to another Tablecraft devlog. 🖐

In the first devlog posted on this website (eons ago), I described how Tablecraft came to be. In that same post, I mentioned the pursuit of our tone, and what I wanted the game’s environment to feel like. At the time, I didn’t know what the environment should look like (and to a certain extent, I still don’t), but I already knew it shouldn’t feel like an enclosed space. So I did the natural thing and ripped out the ceiling of the Lab:

Night sky

Much later, when John and I were randomly discussing Tablecraft (before John joined the team) and I explained to him why I had ripped out the Lab’s ceiling, he asked “what about a treehouse, then?”.

I immediately liked the idea. It could explain the giant hole in the ceiling. Plus, treehouses are awesome. If it were socially acceptable for adults to live in treehouses, and I owned a tree, and I had come to terms with the idea of living off the grid with no access to convenient sanitation for an indefinite period of time, I’d build a treehouse and live in it. However, because none of those things are true, and I have no knowledge of carpentry, I decided to build the treehouse in VR.

My first treehouse prototype wasn’t pretty, but it kinda got the job done. When I got in VR, it felt a lot more welcoming than the old Lab’s cold gray walls. Most people seemed to agree, except perhaps for Reddit’s r/chemistry, who apparently loved the sterile Lab environment when I first shared it there, but then never really seemed to care anymore after I replaced the wall textures with wood. Though come to think of it, it could’ve also been the pooping Blobs that ruined it for them. It’s unclear.

Whatever the case, it was clear to me, and to most non-chemists who tried the game, that the treehouse idea was the way to go. So now it was time to figure out what a proper-looking treehouse lab should look like.

I went online and looked up labs and treehouses, and then I hired a freelance 2D artist (Heng Yi Hsieh) to come up with some sketches based on some guidelines I gave him. This was his first try, which he somehow put together in a couple of hours:

He also provided a diorama:

Very impressive. I then asked him to choose a couple of angles from his first person view sketch and to bring them to life with color. I must’ve goofed and failed to properly highlight the style I had in mind though, because he was clearly thinking of a different style. A spooky style:

At this point I had already almost run out of all the money I could possibly spend on this. It didn’t take long. As a two-person company at the time (myself and Guillaume), we only had a laughably small amount of money, which we rarely used for anything other than going to conferences to showcase Tablecraft, purchasing Unity assets and software licenses, and hiring freelancers when necessary. So I told Heng Yi to take a break.

I decided to try to paint it myself, immediately taking off on a bit of a weird journey:

In the end, I settled on this comic-book inspired look:

I chose the comic-book aesthetic because I figured that if we managed to pull it off in VR, perhaps the game could make players feel like they’re comic-book superheroes, such as Iron Man, for example. That could be cool.

So this is where John Ruiz finally comes in again. John is a 3D artist. I showed him the 2D drawings and asked him if he would like to join our team as a permanent member. He did, and for the months that followed, he and I slowly tried to bring the 2D concept to life in 3D, with John doing all the 3D modeling and me trying to figure out the colors and lighting in Unity.

Our first challenge was in figuring out what style of planks we should go with. Should the treehouse planks be straight or wonky?

While John worked on that, I did some tests in Unity to see what kind of comic-book aesthetics we could potentially try to pull off in-engine.

I experimented with 4 different styles: completely flat (shown above and below, to the left), cel-shaded (shown above and below, in the middle), noir-shaded-with-textures (shown above, to the right) and smooth-shaded-with-textures (shown below, to the right).

I thought the noir-shaded-with-textures look was really comic-book appropriate, so I tried that on a couple of the machines in-game. This was the result:

If we ignore the blatant clash of art styles in the screenshot above and simply focus on the machines sitting on the desk, I suppose this could’ve been one of the paths we could’ve taken with our tone. Instead of going for a welcoming, friendly approach where the player feels like a comic-book superhero, we could’ve gone with this noir look instead, and potentially made them feel like comic-book super-villains whose parents got killed in an alley or something. It came off to me as a style that could be appropriate for mature audiences, and that’s not a bad thing.

Nevertheless, it felt like too drastic a change in tone at the time, one that would be really hard to pull off successfully, so I tried to tone it down a little by experimenting with the smooth-shaded-with-textures look afterwards:

This one didn’t feel as serious as the noir look, but it also wasn’t clear whether it was successfully contributing to the tone I was looking for. The use of placeholder assets and the clash of art styles were preventing me from making a proper assessment. On top of that, I ran out of ideas for other styles I could potentially execute on, so unfortunately, for the time being, I had reached a dead end and had to halt progress on this.

By the time I was done with my artstyle tests, John had completed a first pass of the 3D treehouse mesh, so I jumped into his Unity scene and started experimenting with real-time lighting, various toon shaders and colorful materials:

At the same time, I started prototyping the exterior of the treehouse with placeholder assets from the Unity asset store. The goal was to find out what the exterior should feel like. Should the treehouse be floating in space? Perhaps on an asteroid? Or should it be grounded and hidden in the middle of a forested valley?

Although it sounded kinda cool, the valley idea didn’t pan out. After spending some time in this version of the treehouse in VR, I felt like I was playing a hermit, hiding away from society in some deep forested area. Iron Man is more social than that. Plus, the outside of the treehouse can’t possibly be this static and dull. Ideally, the fun stuff happening in the treehouse should somehow overflow into the outside of the treehouse as well.

The problem, of course, was that I wanted to give the player a chance to explore and learn about the world that lives outside the treehouse, while at the same time remaining reluctant about the idea of adding VR movement in Tablecraft (e.g. teleportation), for reasons that I will explain in future devlogs. Luckily, in the end it all worked out, as it now seems like the answer may have been more straightforward than I thought. If you can’t teleport yourself all the way to the mountain, teleport the mountain all the way to you:

More details on that in the 2nd part of this devlog.

Anyway, at this point I also realized that something felt wrong about the treehouse structure itself. It felt too small to be an Iron Man kind of lab. So I duplicated the entire treehouse and added a new floor. There, I added an ugly giant telescope prototype, which I later replaced with a much better one that John 3D modeled:

After that, when I stepped into the treehouse in VR, things no longer felt as out of place as they did before. The treehouse was now starting to feel more like a solarpunk facility with multiple floors, similar to Iron Man’s cliff-side mansion-lab, as opposed to a tiny shack on a tree. Perhaps Tablecraft’s fiction implies that the chemistry/physics work is done in the balcony area, whereas the astronomy work is done on the upper telescope floor. Likewise, perhaps there are other floors where other kinds of work are done.

This is where we stopped. It’s still a long way from being finished, but at least the core idea is there. Since then, John has been working on light-mapping the whole thing by hand, as he will explain in the devlog that kickstarted this prequel. Light-mapping by hand was a necessary evil for reasons that will be explained in his devlog.

As for the art style, we still need to inject more personality into it so that it doesn’t look as generic as it currently does. I definitely want to make sure we don’t fall into the Childish Danger Zone. Tablecraft is not meant to be childish. Playful, yes, but not childish. By developing a game that features educational content (which, as you might guess, could come off to some players as potentially boring and condescending) and then making it all worse by having it look childish, we’d be shooting ourselves in the foot. There are certain aspects of the art style and audio that still feel very childish, so we’ll be tweaking those in the months to come.

Keep an eye out for Part 2.

Exploring SDFs in Tablecraft: Part 1

Hello there, Rahul here. For those who read my first post, it’s good to see you again.

Recently, I’ve been experimenting with Signed Distance Fields (or SDFs, as they’re known to their friends), and that work’s basically culminated in this first devlog. Here I’ll go over what SDFs are, why we’re considering using them, and how they work. Part 2 is mostly going to be our conclusions and whether or not SDFs were used in Tablecraft. And secretly I’m hoping we can pull a hobbit and do a Part 3 (don’t tell anyone).

Introduction

The story of SDFs and Tablecraft actually begins before I joined the team. You might have noticed how Tablecraft has been changing these past couple of months. Rafael and John have been giving the game an entire visual overhaul, exploring what Tablecraft should look and feel like. In this exploration process, Rafael quickly set up the scene with multiple real-time lights so that he could iterate on the look as fast as possible.

Tablecraft Screenshot 05

Notice the pink-ish flat highlights on the trees and telescope. That’s done with real-time lighting and a toon shader.

But a bunch of real-time lights can’t be a permanent solution, because of all the draw calls they incur. That’s completely unsuited for platforms like the Oculus Quest. The standard answer to this situation is to just use light-mapping. But Unity’s light-mapping algorithm can’t do hard-edged highlights like the ones in the picture above, or at least it couldn’t back when Guillaume and Rafael tested it. So this left the team with two choices: either build some kind of custom light-mapping solution or hand-paint the lighting onto every 3D object.

It was decided that painting 3D objects was likely to be the fastest option. So John started painting all of his 3D models to match the first draft of colors and lighting Rafael had put together using real-time lights.

Fast forward a little bit, and I’m now working with the team to get Tablecraft onto the Quest. I see John post his textures, and I read his lament that even massive 2048×2048 textures aren’t enough to render good edges in VR. People can put their head as close to a texture as they like in VR, so it really takes a lot to keep the fidelity that real-time lighting had.

In response to this problem, I suggested SDFs and found they’re really hard to explain to others, mostly because they seem so magical.

SDFs Really Aren’t Magic

SDFs are textures. You use them with a shader that can interpret them. Together, those two basically let you render shapes with sharp edges no matter how far you zoom in, all while using very small textures (I’m talking 32×32 and 64×64 textures here).

This technique is a great fit for Tablecraft, since our scenes are all just arbitrary shapes of flat color lying on top of each other. In my tests I was able to recreate the 2048×2048 texture on John’s model using one 256×256 texture. And here’s a comparison of how they hold up:

Can you guess which one’s the SDF?

But SDFs really aren’t magic.

How do they work?

Bilinear Filtering

At the core of SDFs is bilinear filtering. Whenever you want to render an image, you rarely have the opportunity to draw it by just assigning each pixel on the screen a color from a pixel in the image. Images are scaled, rotated, and generally just moved around. They don’t necessarily align with your screen’s grid of pixels. So you have to do some math.

Imagine that your texture isn’t a grid of square pixels, but is instead a set of points in space with colors attached to them. So each pixel actually tells us the color of a certain coordinate in space. When you want to draw that texture, you look at where each pixel in your monitor lands among that set of points. Most likely, your screen pixels are landing somewhere between the actual points of your image.

The simplest way to deal with these situations is to find the texture point closest to your screen pixel and assign that texture color to the pixel. That’s the nearest-neighbor method. Bilinear filtering instead takes the four texture points surrounding the position of your screen pixel and takes a weighted average of them. Here’s a modified image from Wikipedia explaining the process in 1D, then 2D:

The yellow and green lines represent the points in the texture. The black line is the position of the screen pixel. The heights of the lines represent their color. Bilinear interpolation linearly interpolates the colors across 2 dimensions.

This is why scaled-up images often look so blurry: bilinear filtering smoothly interpolates between nearby pixels, turning two adjacent colors into a gradient.
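To make that weighted average concrete, here’s a minimal sketch of bilinear filtering in plain Python (just the math, not our actual shader code):

```python
def bilinear_sample(texture, u, v):
    """Sample a grid of values at a fractional coordinate (u, v).

    `texture` is a list of rows indexed as texture[y][x], and (u, v) is in
    texel units, so (0.5, 0.5) lands halfway between four neighboring texels.
    (Edge clamping is skipped to keep the sketch short.)
    """
    x0, y0 = int(u), int(v)           # the four surrounding texels
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = u - x0, v - y0           # how far we sit between them

    # Interpolate along x on the two rows, then along y between those results.
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# A tiny 2x2 "texture": sampling halfway between all four texels
# returns a smooth blend of their values.
tex = [[0.0, 1.0],
       [1.0, 1.0]]
print(bilinear_sample(tex, 0.5, 0.5))  # 0.75
```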

It probably seems odd that a technique for rendering crisp edges relies on this blurring effect. But the technique works because SDFs don’t store information about what color a pixel should be, like regular textures do. SDFs instead store information about distance and take advantage of bilinear filtering to generate pixel-perfect distance values. Which means now it’s time to explain what exactly I mean by distance.

The Distance in SDF

Let’s say you want to draw a circle: a circle that can be scaled up by a billion times without any artifacts. You might say, “well a pixel-perfect image of a circle is really just a collection of screen pixels that are less than a certain radius away from the center.” So you might program a shader that takes advantage of this. It calculates each pixel’s distance from the center and colors all the points inside the circle red. Now you have a red circle that can be scaled up to any size because it’ll always show the sharpest line your monitor can render.
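As a sketch of that per-pixel logic (written in Python instead of shader code, with made-up names):

```python
def shade_pixel(px, py, cx, cy, radius):
    """Return red if the screen pixel (px, py) is inside the circle
    centered at (cx, cy), otherwise white.

    Because the distance is computed per pixel, the edge stays as sharp
    as the monitor can render, no matter how big the circle gets.
    """
    dist = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
    return (255, 0, 0) if dist < radius else (255, 255, 255)

print(shade_pixel(10, 10, 0, 0, 20))  # (255, 0, 0): inside the circle
print(shade_pixel(30, 10, 0, 0, 20))  # (255, 255, 255): outside
```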

But maybe you want to draw other pixel-perfect shapes some day. And you know that some shapes can get really complicated, and it’ll be impractical to calculate whether or not each pixel is inside the shape. So let’s assume that even our simple circle is like one of those complicated shapes. What could you do to still use this distance approach?

How about storing distance in a texture? The nice thing about distance is that it changes linearly as you move through space, which means bilinear filtering’s linear interpolation is a good way to take a limited amount of pre-calculated distance information and accurately scale it up to whatever resolution we need. Using textures also points to a way to generalize SDFs beyond shapes pasted onto the 2D plane of the monitor: you can do these calculations per pixel using things like UV or world position values instead of screen coordinates.

Now the question is how do you encode distance information in a texture?

The Signed in SDF

Imagine if we weren’t dealing with textures. Our main goal is to tell pixels whether or not they are inside an arbitrary shape. We can then color the pixels that are inside the shape. It’d be good if our way of encoding distance made it easy to retrieve this information. So what if we just store distance from the edge of the shape? A positive distance means we are inside the shape, and a negative distance means we are outside the shape. Now all we need to do is check if the distance is less than 0, and we know a pixel is outside the shape!

That’s one bit of nuance about this technique that is easy to overlook. All we really want is a binary result of inside and outside, but that information by itself can’t be interpolated well when scaled up to higher resolutions. That’s why we are storing distance along with the sign.

Since we’re dealing with a texture, we offset our distance information by adding 0.5 and clamping the result between 0 and 1. This information can then be stored as a single channel texture where black represents 0 and white represents 1.
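Putting those pieces together, encoding a distance into a texture channel and reading it back looks roughly like this (a toy sketch; it assumes distances have already been scaled so the range we care about fits between -0.5 and 0.5):

```python
def encode_distance(signed_distance):
    """Map a signed distance (positive inside the shape, negative outside,
    matching the convention above) into the 0..1 range a texture channel stores."""
    return min(max(signed_distance + 0.5, 0.0), 1.0)

def is_inside(sampled_value):
    """Decode a (bilinearly filtered) sample back into inside/outside.

    0.5 marks the edge of the shape: above it we're inside, below it we're outside.
    """
    return sampled_value > 0.5

# A point 0.2 units inside the shape and a point 0.3 units outside it.
print(is_inside(encode_distance(0.2)))   # True
print(is_inside(encode_distance(-0.3)))  # False
```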

Now you can enjoy some really dope circles.

Now you know SDFs! (if you didn’t already)

And you can probably see why we’re excited about them. They’re a great way to render our lighting. The only problem is that they have some limitations, and that’s what we’ve been experimenting with to see how feasible they are in the game.

Our Experiments

In our performance tests, we found SDFs to be more expensive than just drawing a regular texture. We also found reading multiple textures for multiple SDFs to be more expensive than a single texture with an SDF in each channel. This is more or less as expected, and it’s background context for our experimentation.

UV Mapping

One thing we had to make sure of was that SDFs work well with UV mapped textures. Since SDFs store information outside the edge of a shape, that information can spill outside the UV mapped region for that triangle. So that information is liable to be cut off.

I ran a simple test that assuaged my fears of any possible problems. The SDF version of a UV mapped texture worked just fine. Thankfully bilinear filtering has our back, and we always have access to the proper interpolated distance values.

Limited Color Palette

I’m sorry. When I showed the comparison between John’s textures and my SDF recreation up there, I was hiding some flaws. I’m sorry to break your trust like this, but I hope I can build it back.

This is John’s Texture:

This is my SDF recreation:

My recreation was limited to only 5 colors, which is why you see some artifacts on the left. SDFs aren’t inherently limited by colors, but some performance considerations get in the way (as usual).

SDFs draw shapes accurately by figuring out which pixels fall inside the shape. The standard thing to do is just paint the pixels inside the shape a particular color. But you can do a lot more than that. You could render a regular old texture in that shape or paint a gradient there. The problem is how do we tell the shader what to do at each pixel?

The simplest approach is to associate a single action (coloring, drawing a texture, etc.) with an SDF source. So let’s say you have 4 SDF textures drawing 4 shapes; each source can then be told to paint with a single color. If needed, you can create shader variants where one or more of the SDFs is painted with a gradient or whatever the team needs.

Now we don’t want each pixel of the game doing multiple texture reads, because that will hurt performance. So the best thing, performance-wise, is to limit the number of textures to just one. That one texture has 4 channels, which can be used to store 4 SDFs. So you paint your model a base color and then paint 4 shapes of different colors on top, for a total of 5 colors per object in the game.
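Here’s a rough sketch of that per-pixel logic (Python standing in for the shader, with made-up colors):

```python
def shade(sdf_sample, base_color, layer_colors):
    """Composite a base color plus up to four SDF layers from one RGBA sample.

    `sdf_sample` is the (r, g, b, a) value read from the SDF texture at this
    pixel; each channel holds one encoded distance field (0.5 = the shape's edge).
    `layer_colors` are the solid colors associated with those four channels.
    Later layers simply paint over earlier ones where their shape covers the pixel.
    """
    color = base_color
    for channel_value, layer_color in zip(sdf_sample, layer_colors):
        if channel_value > 0.5:  # this pixel is inside the channel's shape
            color = layer_color
    return color

# Only the green channel says "inside", so this pixel gets the second layer's color.
print(shade((0.1, 0.8, 0.3, 0.0),
            base_color=(40, 40, 40),
            layer_colors=[(200, 50, 50), (50, 200, 50), (50, 50, 200), (240, 240, 60)]))
```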

Mapping Alpha Values to Colors

I’ve thought about a technique to kind of bypass this color limitation. You can use one channel, let’s say the alpha channel, to encode color information. For example, you can assign a color to an alpha of 0, an alpha of 0.5, and an alpha of 1. Then let’s say you’re storing 3 SDF shapes in your red channel. In your alpha channel you can paint the region occupied by 1 shape with 0, another shape with 0.5, and the third shape with 1. You can now draw three shapes in three different colors using 2 channels. That’s pretty good.

To store these alpha-value to color mappings, we can just hard-code them into the shader, assign them as shader properties, or store them in another texture.
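To illustrate, here’s a toy version of that lookup in Python (the painted alpha levels and colors are just placeholders, and in a real shader the mapping could live in any of the places mentioned above):

```python
def pick_color(red_sdf_value, alpha_value, alpha_to_color):
    """One channel (red here) stores the SDF for several non-overlapping shapes;
    the alpha channel stores a flat "which color" value painted inside each shape.

    `alpha_to_color` maps the painted alpha levels (0, 0.5, 1) to colors.
    """
    if red_sdf_value <= 0.5:
        return None  # outside every shape; let the base color show through
    # Snap the filtered alpha back to the nearest painted level.
    nearest_level = min(alpha_to_color, key=lambda level: abs(level - alpha_value))
    return alpha_to_color[nearest_level]

palette = {0.0: (200, 50, 50), 0.5: (50, 200, 50), 1.0: (50, 50, 200)}
print(pick_color(0.8, 0.47, palette))  # (50, 200, 50): the second shape's color
```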

Whatever approach we choose, it has to be cheaper than just reading more textures, so what determines whether this technique is worth it is how many more colors you’re able to render with it. For an SDF to render separate shapes, they need a bit of a gap between them, which puts a hard limit on how many separate shapes you can pack into a texture channel. So I think the gains here will generally be modest unless your artist makes art with SDFs in mind.

For us, this approach seems fairly impractical for now. We can just limit our color palette and read more textures as needed. We’ll know better once we decide exactly what art style we want to use. And if rendering turns out to be a huge bottleneck, we can start looking at this again.

Texture Atlas-ing

There’s another thing we could try, which is basically creating a texture atlas: multiple SDF textures packed in a grid arrangement into one bigger texture. This is potentially much better than mapping colors to a texture channel, but it does have drawbacks. SDFs can run into floating-point precision problems that make things look wonky when you zoom in too far or make your object too big. This technique would make us more susceptible to those problems, especially on our massive treehouse meshes, but it’s still worth exploring.

Gradients

The default thing for SDFs is to paint solid colors. To give our artists more flexibility, we also experimented with the options for rendering gradients.

UV-Mapped Gradients

This is the easiest and most efficient way to get gradients. Basically, a single gradient is applied to the entire UV space, and you can define parameters for the UV position where the gradient starts, the angle at which the gradient transition happens, and how stretched out the gradient is.
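A minimal sketch of how such a gradient can be evaluated per pixel (Python instead of shader code; the parameter names are just for illustration, not our actual shader properties):

```python
import math

def gradient_factor(u, v, start_u, start_v, angle_degrees, length):
    """Return a 0..1 blend factor for a linear gradient over UV space.

    The gradient starts at (start_u, start_v), runs in the direction given by
    `angle_degrees`, and takes `length` UV units to blend from color A to color B.
    """
    dir_u = math.cos(math.radians(angle_degrees))
    dir_v = math.sin(math.radians(angle_degrees))
    # Project this pixel's UV offset onto the gradient direction, then normalize.
    along = (u - start_u) * dir_u + (v - start_v) * dir_v
    return min(max(along / length, 0.0), 1.0)

# Halfway along a horizontal gradient that spans the whole 0..1 UV range.
print(gradient_factor(0.5, 0.25, 0.0, 0.0, 0.0, 1.0))  # 0.5
```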

This technique is limited because it takes one texture channel and applies a single gradient to all the SDF shapes defined in it. So if you want multiple shapes where each shape has different gradient colors, or has a different start position and angle or anything, you’ll have to use a new texture channel.

You can get around these limitations by being crafty. You can use UV mapping to get the gradient to be where you want in a polygon. You can also use this technique to get multiple colors on one SDF!

But all of these techniques require extra, potentially finicky and tedious, work from the artist. Also, being able to use UV mapping to mess with the gradient is a blessing and a curse: it means you have to keep the gradient in mind whenever you make a UV mapped model and texture.

Texture-Mapped Gradients

Another approach to gradients is to create a texture that defines the value of the gradient at any specific point, and use that to paint the gradient. This requires one additional texture read to pull off, or we can sacrifice one of the SDF channels to store this data. It’s similar to the approach of getting more colors by mapping one channel’s alpha values to colors.

Like with the approach to having multiple colors in a single SDF texture, you can store the gradient color information in another texture to get very complex gradients. Basically the sky’s the limit here.
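A toy version of that lookup, just to show the idea (the SDF channel picks the shape, a separately sampled gradient value picks where we are between two endpoint colors; all the values are made up):

```python
def shade_with_gradient(sdf_value, gradient_value, color_a, color_b, base_color):
    """Paint an SDF shape with a gradient instead of a solid color.

    `sdf_value` comes from the SDF channel (0.5 = edge); `gradient_value` is a
    0..1 value read from a separate gradient texture (or a sacrificed SDF channel)
    at this pixel; `color_a` and `color_b` are the gradient's endpoint colors.
    """
    if sdf_value <= 0.5:
        return base_color  # outside the shape
    return tuple(a + (b - a) * gradient_value for a, b in zip(color_a, color_b))

# Inside the shape, 30% of the way from orange to yellow.
print(shade_with_gradient(0.9, 0.3, (255, 120, 0), (255, 230, 80), (30, 30, 30)))
```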

Here comes the sun

There is one thing you have to watch out for with this approach: UV mapping. The artist needs to ensure the UV mapping doesn’t cut off the gradient in a way that doesn’t make sense.

That’s about all for gradients. We aren’t actually sure if we’ll need them in the game, or how often we’ll use them. Once we know that, we’ll know if there’s anything we want to develop further here.

Conclusion

So that’s basically the gist of what I’ve been doing for a while. I implemented some SDF shaders and showed what we can do with them. Now it’s up to the art team to figure out what kind of art style they want and what limitations they’re willing to work with.

Stay tuned for Part 2!