dtgreene: This amounts to a fair number of lines of OpenGL, and I believe the situation in Vulkan is even worse in terms of the number of lines of code that needs to be written.
EverNightX: OK. Using a compute shader, which isn't really intended for drawing, rather than what actually is intended for drawing, seems like a questionable choice to me, but I won't try to convince you otherwise.
I'll just point out that if you really want to reduce the amount of code you write, you may wish to look into using a library like
https://www.libsdl.org/, which would give you a lot of power and compatibility with much less effort.
dtgreene: There is one particular characteristic of compute shaders that happens to fit the problem of drawing a tilemap on the screen.
When writing a compute shader, you define what's known as a workgroup, and in particular you specify the workgroup size. This size has three dimensions, though some of them can be 1, so I can effectively make my workgroups 2D. Here, I can make each workgroup the size of one tile.
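For example, assuming 8×8-pixel tiles (an assumed size; the actual tile dimensions aren't specified here), the declaration at the top of the compute shader might look like this:

```glsl
// One workgroup per tile; 8x8 is an assumed tile size.
layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;
```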
Now, in the shader code itself, I have three integer vector variables that I can use (a sketch putting them together follows the list):
* The global invocation ID, which here corresponds to a single pixel of the texture I'm rendering to
* The workgroup ID, which corresponds to the particular tile on the screen; I use it to index into what I call the nametable (terminology taken from the NES development community) to determine which tile to draw (note that each workgroup draws one tile)
* The local invocation ID, which corresponds to the pixel within the particular tile being drawn.
Also, note that these IDs are all integers, which feels natural when you're working with tilemaps and pixel art.
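To make that concrete, here's a minimal sketch of such a compute shader, assuming 8×8 tiles, a nametable stored in a shader storage buffer, and a tile atlas with its tiles laid out in a single row; the binding points and names are all hypothetical:

```glsl
#version 430
layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;

// Hypothetical bindings: output image, tile atlas, nametable.
layout(rgba8, binding = 0) uniform writeonly image2D outImage;
layout(binding = 1) uniform sampler2D tileAtlas;   // tile graphics, side by side in one row
layout(std430, binding = 2) buffer Nametable { uint tiles[]; };

uniform int tilesPerRow; // width of the nametable, in tiles

void main() {
    ivec2 pixel = ivec2(gl_GlobalInvocationID.xy); // pixel of the output texture
    ivec2 tile  = ivec2(gl_WorkGroupID.xy);        // which tile on the screen
    ivec2 local = ivec2(gl_LocalInvocationID.xy);  // pixel within that tile

    // Look up which tile to draw here, then fetch the matching texel from the atlas.
    uint tileIndex = tiles[tile.y * tilesPerRow + tile.x];
    ivec2 atlasTexel = ivec2(int(tileIndex) * 8 + local.x, local.y);
    imageStore(outImage, pixel, texelFetch(tileAtlas, atlasTexel, 0));
}
```

Dispatching it is then one workgroup per on-screen tile, e.g. glDispatchCompute(screenWidthInTiles, screenHeightInTiles, 1).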
It just happens that this maps *so well* to the problem of drawing a tilemap.
Doing this rendering with a vertex and fragment shader is possible, but I would have to do the following (a rough fragment-shader sketch follows the list):
* Do some conversion from floating-point normalized coordinates to the integer coordinates I want (though there does appear to be a way to get the pixel coordinates in the fragment shader)
* Do some math to convert it to the values corresponding to the workgroup and local invocation IDs. (This involves doing an integer division and using both the quotient and remainder; unfortunately GLSL doesn't seem to have a way to get both in one operation.)
* Still need a vertex shader as well, though at least it would be trivial in this instance.
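For comparison, here's a rough sketch of the fragment-shader version, using the same hypothetical bindings as the compute sketch above: gl_FragCoord supplies the pixel coordinates mentioned in the first point, and the separate division and modulo recover the equivalents of the workgroup and local invocation IDs:

```glsl
#version 430

layout(binding = 1) uniform sampler2D tileAtlas;   // same hypothetical atlas as above
layout(std430, binding = 2) buffer Nametable { uint tiles[]; };

uniform int tilesPerRow;

out vec4 fragColor;

void main() {
    ivec2 pixel = ivec2(gl_FragCoord.xy); // truncate the float pixel coordinates
    ivec2 tile  = pixel / 8;              // equivalent of the workgroup ID
    ivec2 local = pixel % 8;              // equivalent of the local invocation ID
    uint tileIndex = tiles[tile.y * tilesPerRow + tile.x];
    fragColor = texelFetch(tileAtlas, ivec2(int(tileIndex) * 8 + local.x, local.y), 0);
}
```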
As for SDL, I actually do use it, for creating the window, getting the OpenGL context to use, and handling input. The thing is, I could use it for the graphics drawing too, but I wouldn't get hardware acceleration for the calculations (plus I find it fun to figure out how to do this stuff on the GPU). That would work something like this:
* What the compute shader is currently doing would be done in software. (It's possible to lock the surface/texture for this purpose, and write to the pixels directly.)
* Have SDL handle upscaling the image.