If once upon a time you fell in love with NES, SNES and PS1-era JRPGs, then this title is right up your alley! 8-Bit Adventures 2 is now available on GOG, along with its Soundtrack, or in a Bundle of both – all with a 10% launch discount. The offer ends on February 7, 2 PM UTC.

8-Bit Adventures 2 is a retro-style turn-based RPG with an earnest, engaging storyline and relatable, easy-to-love pixel characters. You'll experience strategic turn-based battles and deep party customisation, as well as bizarre monsters and a large, fantastical world filled with people full of personality. All of that comes in nostalgic, vibrant 8-bit-inspired visuals, sprinkled with an unforgettable Soundtrack!
DarkBattler: By the way, if anyone is hesitant to buy the game, there is a demo here:
https://www.gog.com/en/game/8bit_adventures_2_demo

I wonder why it wasn't mentioned in the announcement post.
BreOl72: Most probably because it is featured on the game's store page, and everyone who is interested enough to check out the game, will inevitably stumble upon it?
This, my fellow GOG user, is the kind of very good answer that has the power to make the one getting it feel incredibly stupid.
I'm going to lie down for a few hours, now.
BreOl72: Most probably because it is featured on the game's store page, and everyone who is interested enough to check out the game, will inevitably stumble upon it?
DarkBattler: This, my fellow GOG user, is the kind of very good answer that has the power to make the one getting it feel incredibly stupid.
I'm going to lie down for a few hours, now.
My response really wasn't intended to make you feel stupid.
I promise.
BreOl72: My response really wasn't intended to make you feel stupid.
I promise.
I don't believe you.
No one wields such powers, with such mastery, by accident.
Not that it really matters, but the whole 8-bit/16-bit thing has been confused historically. It was a reference to the console's CPU, not its graphics. It really did not have to do with tile size or color bit depth.

Even the 16-bit SNES maxed out at 256 simultaneous colors (which is 8-bit graphics).

Cool there is a demo for this.
dtgreene: One thing I'm wondering is whether the game engine works anything like how the old consoles work, or whether it's a modern engine that just draws pairs of triangles and textures them.
It says it uses DirectX 9. So it's very likely the engine is drawing quads whether they are aware of it or not.
What language and graphics API are you using?
Post edited February 02, 2023 by EverNightX
dtgreene: One thing I'm wondering is whether the game engine works anything like how the old consoles work, or whether it's a modern engine that just draws pairs of triangles and textures them.
EverNightX: It says it uses DirectX 9. So it's very likely the engine is drawing quads whether they are aware of it or not.
What language and graphics API are you using?
Currently (though I'm going to try to keep the graphics code simple enough to re-implement if needed), I'm using OpenGL, but the one unusual thing I'm doing is that I'm not using the rendering pipeline. Instead, I'm creating a texture, using a compute shader to draw the picture into the texture at a fixed resolution (the compute shader, and specifically the way workgroups work in compute shaders, just happens to map well into the problem of rendering a tilemap image), attaching the texture to a framebuffer object, then blitting it to the screen.
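A minimal sketch of that flow in OpenGL 4.5 / DSA terms (the 320x240 target size, the 8x8 workgroup-per-tile split, and all names here are illustrative, not the engine's actual code):

    /* include your OpenGL 4.5 function loader of choice */

    #define FB_W 320  /* fixed internal resolution (illustrative) */
    #define FB_H 240

    GLuint tex, fbo, compute_prog; /* compute_prog: an already-linked compute shader */

    void init_target(void)
    {
        /* Texture the compute shader draws into. */
        glCreateTextures(GL_TEXTURE_2D, 1, &tex);
        glTextureStorage2D(tex, 1, GL_RGBA8, FB_W, FB_H);

        /* Framebuffer wrapping the texture, so it can be blitted. */
        glCreateFramebuffers(1, &fbo);
        glNamedFramebufferTexture(fbo, GL_COLOR_ATTACHMENT0, tex, 0);
    }

    void draw_frame(int win_w, int win_h)
    {
        /* Bind the texture as image unit 0 and run the compute shader:
           one 8x8 workgroup per 8x8 tile. */
        glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
        glUseProgram(compute_prog);
        glDispatchCompute(FB_W / 8, FB_H / 8, 1);

        /* Make the image writes visible before the blit reads them. */
        glMemoryBarrier(GL_FRAMEBUFFER_BARRIER_BIT);

        /* Blit (here a plain stretch) to the default framebuffer. */
        glBlitNamedFramebuffer(fbo, 0, 0, 0, FB_W, FB_H,
                               0, 0, win_w, win_h,
                               GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }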

Worth noting that, on a modern GPU, it might actually be faster to draw a full screen quad rather than blitting to the screen, mainly because modern GPUs are highly optimized triangle renderers.

Currently I'm targeting OpenGL 4.5, mainly because of direct state access, which makes things easier to code; but unless the game ends up running unreasonably slowly on a low-spec machine using software rendering (llvmpipe), I probably don't need to switch to an older, more compatible version (or to just using SDL directly, which is probably the way to go for pure software rendering in this project).

Edit: I'm writing the graphics engine in C (yes, plain C, not C++), but I'm planning on making it accessible from Python, which is the language I want to use for my game logic.
Post edited February 02, 2023 by dtgreene
dtgreene: the one unusual thing I'm doing is that I'm not using the rendering pipeline. Instead, I'm creating a texture, using a compute shader to draw the picture into the texture at a fixed resolution
OK, well that's not an approach I would have thought to do. Long ago I did render tiles using a 2D API. I can't say I've thought thru how best to do it in a modern API. But my first thought would be to construct the geometry for the entire map programmatically as a single mesh. Ideally you could place all the tile images on a single texture for that map (or that layer of the map)

So if I had a map 128x200 tiles big I'd call a function that took that width, height, and an array of indices that mapped an image to each tile. This function could return an array of vertices that were a quad for each tile and that included the texture coordinates as well. Since this would be done on loading the map there really would not be much in the way of calculations needed during rendering.
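A hypothetical sketch of what I mean (all names made up; one quad = two triangles = six vertices per tile, with the atlas coordinates baked in at load time):

    #include <stdlib.h>

    typedef struct { float x, y, u, v; } Vertex;

    /* Builds one static mesh for a w-by-h tile map. tiles[y*w + x] is the
       index of the image for that cell; atlas_cols is how many tiles fit
       across the atlas texture; tile_u/tile_v is the size of one tile in
       normalized atlas coordinates. */
    Vertex *build_map_mesh(int w, int h, const int *tiles,
                           int atlas_cols, float tile_u, float tile_v)
    {
        Vertex *v = malloc((size_t)w * h * 6 * sizeof *v);
        if (!v) return NULL;
        size_t n = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int idx = tiles[y * w + x];
                float u0 = (idx % atlas_cols) * tile_u;
                float v0 = (idx / atlas_cols) * tile_v;
                float x0 = (float)x, y0 = (float)y;
                /* Two triangles covering the unit cell (x0,y0)-(x0+1,y0+1). */
                Vertex quad[6] = {
                    { x0,     y0,     u0,          v0          },
                    { x0 + 1, y0,     u0 + tile_u, v0          },
                    { x0 + 1, y0 + 1, u0 + tile_u, v0 + tile_v },
                    { x0,     y0,     u0,          v0          },
                    { x0 + 1, y0 + 1, u0 + tile_u, v0 + tile_v },
                    { x0,     y0 + 1, u0,          v0 + tile_v },
                };
                for (int i = 0; i < 6; i++) v[n++] = quad[i];
            }
        }
        return v;
    }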

At that point I'm not sure what advantage there would be in not using the normal vertex/fragment shader stages to draw. Because as you say they are going to be very efficient at rendering with that pattern. But you may have reasons for doing things as you are.

And yeah I don't blame you using C. That would be my preference as well.
Post edited February 02, 2023 by EverNightX
EverNightX: It says it uses DirectX 9. So it's very likely the engine is drawing quads whether they are aware of it or not.
What language and graphics API are you using?
dtgreene: Currently (though I'm going to try to keep the graphics code simple enough to re-implement if needed), I'm using OpenGL, but the one unusual thing I'm doing is that I'm not using the rendering pipeline. Instead, I'm creating a texture, using a compute shader to draw the picture into the texture at a fixed resolution (the compute shader, and specifically the way workgroups work in compute shaders, just happens to map well into the problem of rendering a tilemap image), attaching the texture to a framebuffer object, then blitting it to the screen.
Will this cause issues with different screen resolutions and scaling?
dtgreene: the one unusual thing I'm doing is that I'm not using the rendering pipeline. Instead, I'm creating a texture, using a compute shader to draw the picture into the texture at a fixed resolution
EverNightX: OK, well that's not an approach I would have thought to do. Long ago I did render tiles using a 2D API. I can't say I've thought thru how best to do it in a modern API. But my first thought would be to construct the geometry for the entire map programmatically as a single mesh. Ideally you could place all the tile images on a single texture for that map (or that layer of the map)

So if I had a map 128x200 tiles big I'd call a function that took that width, height, and an array of indices that mapped an image to each tile. This function could return an array of vertices that were a quad for each tile and that included the texture coordinates as well. Since this would be done on loading the map there really would not be much in the way of calculations needed during rendering.

At that point I'm not sure what advantage there would be in not using the normal vertex/fragment shader stages to draw. Because as you say they are going to be very efficient at rendering with that pattern. But you may have reasons for doing things as you are.

And yeah I don't blame you using C. That would be my preference as well.
The advantage in not using the normal vertex/fragment shader approach is that I don't have to set up the shaders. In other words, it's simpler to program.

Also, I'm wondering if blitting would be faster than going through the 3D pipeline if using a software renderer. (Might be interesting to test this at some point.)

(I've heard that at least one OpenGL driver will actually translate a blit function call into drawing a triangle pair and applying the texture.)

dtgreene: Currently (though I'm going to try to keep the graphics code simple enough to re-implement if needed), I'm using OpenGL, but the one unusual thing I'm doing is that I'm not using the rendering pipeline. Instead, I'm creating a texture, using a compute shader to draw the picture into the texture at a fixed resolution (the compute shader, and specifically the way workgroups work in compute shaders, just happens to map well into the problem of rendering a tilemap image), attaching the texture to a framebuffer object, then blitting it to the screen.
paladin181: Will this cause issues with different screen resolutions and scaling?
When you blit in OpenGL:
* You can change the size (on both axes independently, which means I have to do extra work to maintain the aspect ratio; see the sketch below). As a result, it will be scaled to the needed resolution. I believe you can specify a filter, like GL_NEAREST (good for pixel art) or GL_LINEAR (for smoother graphics styles).
* You can even flip the image, which is useful because OpenGL's coordinate system is different from what I'd expect for computer graphics (and I believe is different from what some other graphics APIs use).
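The aspect-ratio bookkeeping plus the flip might look roughly like this (a sketch; the flip is done by swapping the source Y coordinates):

    #include <math.h>
    /* plus your GL loader header */

    /* Blit a src_w x src_h texture (attached to fbo) into a win_w x win_h
       window, preserving aspect ratio with black bars. */
    void present(GLuint fbo, int src_w, int src_h, int win_w, int win_h)
    {
        float scale = fminf((float)win_w / src_w, (float)win_h / src_h);
        int dst_w = (int)(src_w * scale), dst_h = (int)(src_h * scale);
        int x0 = (win_w - dst_w) / 2, y0 = (win_h - dst_h) / 2;

        glClear(GL_COLOR_BUFFER_BIT);  /* the black bars */
        glBlitNamedFramebuffer(fbo, 0,
                               0, src_h, src_w, 0,  /* source, flipped in Y */
                               x0, y0, x0 + dst_w, y0 + dst_h,
                               GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }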

(The current tileset being used is actually an old BIOS font that I converted via script. Saves me having to actually do any art.)
Post edited February 02, 2023 by dtgreene
DarkBattler: I played for about five hours and the game looks really good, but I had to refrain from continuing because I have another game to finish first.
From what I've seen, the gameplay is very similar to the first one, but more refined, and the menus and graphics are beautiful. I can't say too much about all that for now, though, since I haven't played for very long.

However, I've listened to the soundtrack several times now and it's amazing!
It's about three and a half times longer than the first game's soundtrack, and the composer has done a stellar job on every single track.

Honestly, I'm impressed with what I've seen so far, and I can't wait to dive into it for real.

By the way, if anyone is hesitant to buy the game, there is a demo here:
https://www.gog.com/en/game/8bit_adventures_2_demo

I wonder why it wasn't mentioned in the announcement post.
Hey Darkbattler! Great to see you again =D Thank you SO much for buying and playing the game and soundtrack. Sebastian specifically included the FLAC files because of your request for high quality audio last time we spoke =)

I'm thrilled with all your kind words, so thanks for letting me (and other players) know!

EverNightX: Not that it really matters, but the whole 8-bit/16-bit thing has been confused historically. It was a reference to the console's CPU, not its graphics. It really did not have to do with tile size or color bit depth.

Even the 16-bit SNES maxed out at 256 simultaneous colors (which is 8-bit graphics).

Cool there is a demo for this.
Very good point EverNightX! I think nowadays it's more evocative of an era, and an audiovisual style (e.g. sprites drawn at 16x16, a limited colour palette, etc.). But you're absolutely right on the technical origins (I always love that tidbit about the SNES' colour palette). Thanks for having a look at the game! =)
dtgreene: The advantage in not using the normal vertex/fragment shader approach is that I don't have to set up the shaders. In other words, it's simpler to program.
I believe you are mistaken. The shaders for that would be trivial.
dtgreene: The advantage in not using the normal vertex/fragment shader approach is that I don't have to set up the shaders. In other words, it's simpler to program.
EverNightX: I believe you are mistaken. The shaders for that would be trivial.
The shaders would be trivial, but there's still code that has to be written around the shaders. In particular, the code needs to (see the sketch at the end of this post):
* Load the shader source code into memory (perhaps from disk, or they could be compiled into the binary)
* Compile each shader stage
* Link the stages together to form a program
* Select the program as the one to be used
* Set up the VAO and VBO, as well as uniforms
* Then actually run the draw command

Granted, much of this I do need to do for the compute shader, but there I don't need the VAO/VBO, and I only need one shader stage. Also, I can make the VAO and VBO empty, instead computing the vertex coordinates on the fly in the shader (or just hardcoding them in; there's only 3 or 4 of them, anyway), which does simplify things a bit.

This amounts to a fair number of lines of OpenGL, and I believe the situation in Vulkan is even worse in terms of the number of lines of code that needs to be written.
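By contrast, the compute-only path folds most of that into one small helper plus a dispatch. A sketch of the compile-and-link boilerplate I mean (minimal error handling):

    #include <stdio.h>
    /* plus your GL loader header */

    /* Compile a single compute shader stage and link it into a program. */
    GLuint make_compute_program(const char *src)
    {
        GLuint sh = glCreateShader(GL_COMPUTE_SHADER);
        glShaderSource(sh, 1, &src, NULL);
        glCompileShader(sh);

        GLint ok;
        glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(sh, sizeof log, NULL, log);
            fprintf(stderr, "compile error: %s\n", log);
        }

        GLuint prog = glCreateProgram();
        glAttachShader(prog, sh);
        glLinkProgram(prog);
        glDeleteShader(sh);  /* freed once the program is deleted */
        return prog;
    }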
dtgreene: This amounts to a fair number of lines of OpenGL, and I believe the situation in Vulkan is even worse in terms of the number of lines of code that needs to be written.
OK. Using a compute shader, which isn't really intended for drawing, rather than what actually is intended for drawing seems a questionable choice to me, but I won't try to convince you otherwise.

I'll just point out that if you really wanted to reduce the amount of code you write you may wish to look into using a library like https://www.libsdl.org/ which would give you a lot of power and compatibility w/ much less effort.
dtgreene: This amounts to a fair number of lines of OpenGL, and I believe the situation in Vulkan is even worse in terms of the number of lines of code that needs to be written.
EverNightX: OK. Using a compute shader, which isn't really intended for drawing, rather than what actually is intended for drawing seems a questionable choice to me, but I won't try to convince you otherwise.

I'll just point out that if you really wanted to reduce the amount of code you write you may wish to look into using a library like https://www.libsdl.org/ which would give you a lot of power and compatibility w/ much less effort.
There is one particular characteristic of compute shaders that happens to fit the problem of drawing a tilemap on the screen.

When writing a compute shader, you define what's known as a workgroup, and in particular specify the workgroup size. This size has 3 dimensions, though it's possible for some of them to be 1, so I can, say, effectively have my workgroups be 2D. In particular, I can have each workgroup be the size of one tile.

Now, in the shader code itself, I have three integer vector variables that I can use:
* The global invocation ID, which here corresponds to a single pixel of the texture I'm rendering to
* The workgroup ID, which corresponds to the particular tile on the screen; I can use it to index into what I call the nametable (terminology taken from the NES development community) to determine which tile to draw here (note that each workgroup draws one tile)
* The local invocation ID, which corresponds to the pixel chosen from the particular tile being drawn.
Also, note that these IDs are all integers, which feels natural when you're working with tilemaps and pixel art.

It just happens that this maps *so well* to the problem of drawing a tilemap.
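A sketch of what that shader looks like in GLSL (assuming 8x8 tiles and a tileset texture laid out as a single row; screen/nametable/tileset are just my own names):

    #version 450
    layout(local_size_x = 8, local_size_y = 8) in;       // one workgroup = one tile
    layout(rgba8, binding = 0) uniform writeonly image2D screen;
    layout(binding = 1) uniform usampler2D nametable;    // one tile index per map cell
    layout(binding = 2) uniform sampler2D tileset;       // 8x8 tiles in a single row

    void main() {
        ivec2 pixel = ivec2(gl_GlobalInvocationID.xy);   // pixel in the target texture
        ivec2 tile  = ivec2(gl_WorkGroupID.xy);          // which map cell this group draws
        ivec2 texel = ivec2(gl_LocalInvocationID.xy);    // pixel within that tile
        uint  index = texelFetch(nametable, tile, 0).r;  // look up the tile to draw
        imageStore(screen, pixel,
                   texelFetch(tileset, ivec2(int(index) * 8 + texel.x, texel.y), 0));
    }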

Doing this rendering with a vertex and fragment shader is possible (see the sketch after this list), but I would have to:
* Do some conversion from floating-point normalized coordinates to the integer coordinates I want (though gl_FragCoord does at least give you the pixel coordinates in the fragment shader)
* Do some math to convert it to the values corresponding to the workgroup and local invocation IDs. (This involves doing an integer division and using both the quotient and remainder; unfortunately GLSL doesn't seem to have a way to get both in one operation.)
* Still need that vertex shader as well, though at least that shader would be trivial in this instance.
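For comparison, the fragment-shader version of that math would be roughly this (a sketch, same made-up names; note the separate divide and modulo):

    #version 450
    layout(binding = 1) uniform usampler2D nametable;
    layout(binding = 2) uniform sampler2D tileset;
    out vec4 color;

    void main() {
        ivec2 pixel = ivec2(gl_FragCoord.xy);  // window-space pixel coordinates
        ivec2 tile  = pixel / 8;               // quotient: which map cell
        ivec2 texel = pixel % 8;               // remainder: pixel within the tile
        uint  index = texelFetch(nametable, tile, 0).r;
        color = texelFetch(tileset, ivec2(int(index) * 8 + texel.x, texel.y), 0);
    }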

As for SDL, I actually do use it, for creating the window, getting the OpenGL context, and handling input. Thing is, I could use it for the graphics drawing, but I wouldn't get hardware acceleration for the calculations (plus I find it fun to figure out how to do this stuff on the GPU). The way this would work would be something like this (sketched in code after the list):
* What the compute shader is currently doing would be done in software. (It's possible to lock the surface/texture for this purpose, and write to the pixels directly.)
* Have SDL handle upscaling the image.
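Something like this, using SDL2's streaming-texture API (a sketch; tex would be created with SDL_TEXTUREACCESS_STREAMING, and SDL_RenderSetLogicalSize can handle aspect-correct upscaling):

    #include <SDL.h>

    /* Software path: render tiles into a streaming texture on the CPU,
       then let SDL's renderer scale it to the window. */
    void present_sw(SDL_Renderer *ren, SDL_Texture *tex, int fb_w, int fb_h)
    {
        void *pixels; int pitch;
        SDL_LockTexture(tex, NULL, &pixels, &pitch);
        /* ... write one fb_w x fb_h frame of tile data into pixels ... */
        SDL_UnlockTexture(tex);

        SDL_RenderClear(ren);
        SDL_RenderCopy(ren, tex, NULL, NULL);  /* scaled to the output size */
        SDL_RenderPresent(ren);
    }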
dtgreene: ...
OK. Well I hope it goes well.