I have tried Unreal Engine 5, and it was surprisingly bad compared to Unreal Engine 4 performance-wise. With the exact same environment (with a lot of foliage) I get 10-13 fps in UE5, while in UE4 I get 60+ fps. This was quite a disappointment, but it is early access, to be fair. I did notice, however, that they added loads of features that made the engine easier to use. They kept Linux support, so that's a plus.
kohlrak: Unfortunately, that's not specific to Python. And now in C++ you can do the same thing to a certain degree, with "auto." Coming to a C# near you, for sure.
dtgreene: There's a difference.

Python:
x = 2
x = "2"

No error here, until you try to do print(x + 1).

C++:
auto x = 2;
x = "2";

The second assignment is an error, as you can't assign a pointer-to-character to an integer variable without a cast. (I believe you can in plain C, but expect a compiler warning if you try this. Also, the code I wrote, I believe, works the same in C (at least in older standards with "implicit int"), even though the meaning of "auto" is completely different.)

For comparison:
In Perl and JavaScript:
x = 2;
x = "2";

No error here, and even x + 1 is not an error (the result is 3 in Perl or "21" in JavaScript).
Yeah, that's the primary nuance. The problem with auto (and languages like Python and such) is you don't know how it's handled, what's underneath, how large the storage has to be, etc., especially if you need to dynamically allocate it and such. A lot of my code, especially since I do a lot of pointer-heavy things, ends up with type ambiguity anyway. Another separation between the ambiguous languages (which really annoys me when I use bash or PHP, as they're the ones I use more often with this mentality) is the handling of certain operators. PHP uses . to merge strings, and everything is automatically a string anyway, so that's not as big of a deal, but bash... It just ends up being easier to invoke expr by the end of it all.

For the game I'm thinking of writing, I may just write my own tilemap renderer (using an approach similar to what the NES did, but with perhaps more colors and such), and design the game to not need things like pathfinding or physics (or implement them in simplified form, like only having to route through squares on a grid rather than through continuous space).
My toy OS had a really nifty feature that was unique, until I found out it actually wasn't. I had this idea where you would store images as text and then define a palette for replacing that text. In theory, you could use the exact same text to create animations for certain types of sprites for things like fire, flashing lights, etc. Then I discovered PixMap and realized I had only barely changed the format through independent invention. I have a feeling this format might be right up your alley (PixMap, not my format, since libraries like SDL2 support it out of the box).
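The gist of it, sketched in C (hypothetical data, not the exact format my OS used or PixMap itself):

#include <stddef.h>
#include <stdint.h>

/* Each character in the text image maps to a color. Swapping in a
   different palette per frame animates the same text (fire, flashing
   lights, etc.). */
struct palette_entry { char key; uint32_t rgba; };

static const char *torch[4] = { ".FF.", "FffF", ".Ff.", "..W." };

static const struct palette_entry frame0[] = {
    { '.', 0x00000000 }, /* transparent */
    { 'F', 0xFF2200FF }, /* outer flame */
    { 'f', 0xFFAA00FF }, /* inner flame */
    { 'W', 0x663300FF }, /* wood        */
};

static uint32_t lookup(const struct palette_entry *pal, size_t n, char key)
{
    for (size_t i = 0; i < n; i++)
        if (pal[i].key == key)
            return pal[i].rgba;
    return 0;
}

/* Expand the text image into pixels using the given palette. */
static void expand(uint32_t out[4][4],
                   const struct palette_entry *pal, size_t n)
{
    for (int y = 0; y < 4; y++)
        for (int x = 0; x < 4; x++)
            out[y][x] = lookup(pal, n, torch[y][x]);
}

A frame1 palette with the flame colors swapped, run through the same expand(), is the whole animation.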

EDIT: Given your focus on RPGs, unless you're going to surprise me here, I don't think you need pathfinding. Also, despite what I say about engines, some versions of RPG Maker are surprisingly fast. Still, we know what the cheaper option is.
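To show how small the "route through squares on a grid" case from your earlier post really is, here's a minimal breadth-first search over tiles in C (all names are mine, nothing engine-specific):

#include <string.h>

#define W 16
#define H 16

/* Shortest path length from (sx,sy) to (tx,ty) on a tile grid,
   or -1 if unreachable. blocked[y][x] marks impassable tiles. */
static int grid_path_length(const unsigned char blocked[H][W],
                            int sx, int sy, int tx, int ty)
{
    int dist[H][W];
    int qx[W * H], qy[W * H], head = 0, tail = 0;
    static const int dx[4] = { 1, -1, 0, 0 };
    static const int dy[4] = { 0, 0, 1, -1 };

    memset(dist, -1, sizeof dist);          /* all bytes 0xFF => -1 */
    dist[sy][sx] = 0;
    qx[tail] = sx; qy[tail] = sy; tail++;

    while (head < tail) {
        int x = qx[head], y = qy[head]; head++;
        if (x == tx && y == ty)
            return dist[y][x];
        for (int i = 0; i < 4; i++) {       /* visit the 4 neighbors */
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
            if (blocked[ny][nx] || dist[ny][nx] != -1) continue;
            dist[ny][nx] = dist[y][x] + 1;
            qx[tail] = nx; qy[tail] = ny; tail++;
        }
    }
    return -1;
}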
Post edited June 01, 2021 by kohlrak
Orkhepaj: This engine looks so good.

It is becoming hard to differentiate real-time rendering from real life.
...
the main differences are:

real life is way, way cheaper, powered by potatoes, has horrible resolution and no ray tracing

though I kinda got used to real life
kohlrak: EDIT: Given your focus on RPGs, unless you're going to surprise me here, I don't think you need pathfinding. Also, despite what I say about engines, some versions of RPG Maker are surprisingly fast. Still, we know what the cheaper option is.
Now, the question is: Are they still fast under LIBGL_ALWAYS_SOFTWARE=1? (Or, alternatively, on a virtual machine that has no access to the GPU?)

Also, is the Raspberry Pi supported as a target platform?

(I actually don't want to use RPG Maker. There's the proprietary license of the engine, limited OS support I believe, and the RPG I want to make differs from RPG conventions in some rather significant ways to the point where it might be easier to start from scratch rather than add scripts to RPG maker's engine.)
kohlrak: EDIT: Given your focus on RPGs, unless you're going to surprise me here, I don't think you need pathfinding. Also, despite what I say about engines, some versions of RPG Maker are surprisingly fast. Still, we know what the cheaper option is.
dtgreene: Now, the question is: Are they still fast under LIBGL_ALWAYS_SOFTWARE=1? (Or, alternatively, on a virtual machine that has no access to the GPU?)

Also, is the Raspberry Pi supported as a target platform?

(I actually don't want to use RPG Maker. There's the proprietary license of the engine, limited OS support I believe, and the RPG I want to make differs from RPG conventions in some rather significant ways to the point where it might be easier to start from scratch rather than add scripts to RPG maker's engine.)
Right, I forgot you're a more libre type. I haven't tested this yet, but it's on my todo list. I was thinking about using it to port some games I have to Android.
Post edited June 01, 2021 by kohlrak
Not_you: I have tried Unreal Engine 5, and it was surprisingly bad compared to Unreal Engine 4 performance-wise. With the exact same environment (with a lot of foliage) I get 10-13 fps in UE5, while in UE4 I get 60+ fps. This was quite a disappointment, but it is early access, to be fair. I did notice, however, that they added loads of features that made the engine easier to use. They kept Linux support, so that's a plus.
Keep in mind it's probably optimized only for latest gen GPUs - not sure what hardware you've tested it on. And in any case, it's just coming out. UE4 was a good iteration of UE to be honest, if we remember the horror shows that UE2 and UE3 turned out to be, so who knows about UE5... Only time will tell.
LiefLayer: In the meantime just read this (from the latest version of Godot C#)
That's not the latest version of Godot. I just downloaded Godot with Mono and there's no such warning.
C# support has been there since 2017, as I already said. Last year MS sponsored a C# overhaul, which resulted in C# being faster in some operations than the built-in GDScript. You might have gotten that warning if you downloaded a nightly build while that overhaul was in progress last year. If you just downloaded the stable version from the official site, you wouldn't have gotten that warning or any other issues.

kohlrak: That's a good sign. When using C++ do you have to use things like "extern "C"" or anything like that? That'd be another dead giveaway.
I don't use C++ with Godot so I don't know. There are extensions for C++ though.
Here's something on the topic that might help:
https://gamefromscratch.com/godot-with-c/
https://github.com/godotengine/gdnative-demos

kohlrak: The game on my todo list is going to require I do all that myself, 'cause these engines won't have the features I actually need. Dealing with the pathfinding is one of the reasons I'm sitting here playing games and talking on the forums rather than completing the project, so I get where you're coming from. I have a few ideas that could work.
I regularly get away from the project and enjoy life, so I wouldn't stress out too much. That means playing games, hanging out in some forums, and a daily walk in nature. I also use tricks and make compromises to make my life easier. For example, I have all civilian units pass right through each other so they don't have to re-calculate every time someone crosses their path or comes into close proximity.

kohlrak: I've decided at this post that it's not worth pursuing with him. He's either trolling or has cognitive bias to an extreme degree (notice the fanboyism and the insinuation that I know little to nothing?). My guess is he probably won't try it, 'cause he's a little too invested in Unity.

Since you have more experience trying multiple engines, which ones do and don't suffer from gimbal lock? I imagine Unreal does, given it seems to focus on FPS games, but what about Godot and Unity? After my current project I wouldn't mind getting into flight sims or space sims, as I have some unique ideas for those.
My goal is not to force Godot on others. I have used it myself for only a couple of years now. It's by no means a perfect engine. At least not for everyone.

I'm not an expert on game engines. I have tried some but flight & space simulations aren't my cup of tea really.

While general purpose engines can be useful for most developers, I think engines that target niche genres could serve that market better. It's like a Swiss Army knife: it can do all sorts of things, but the question is - how well? Then again, in many situations you don't need super special tools for everything and just need the job done. Another approach would be to be able to turn off features that aren't being used, like Godot does, for example. Godot can handle 3D pretty well, but it depends on the scale. Godot has its limits. You could use tricks to bypass these limits, but would you want to? Another thing is that Godot isn't really a AAA engine. Then again, neither is Unity. For 3D (especially at a large scale) I'd use an engine that is well optimised for 3D, like Unreal. The only thing with Unreal is that people say v5 isn't as good and optimised yet as v4. Apparently there are huge performance differences even when loading the exact same scenes. Which means I'd stick with Unreal 4 for now. It also depends how long your current project takes, so maybe once you finish with that, Unreal 5 could be mature by then.

Another thing to consider is what exactly your ideas are and your level of coding skills (which seem to surpass mine by far). You could get away even with 2D by faking 3D like they did in the old days. Or, if you're comfortable, you could use some libraries, frameworks or create your own engine. There are many things to consider when choosing a game engine, so I can only make guesses. Here's a list of different game engines.

https://gamefromscratch.com/game-engine-guides/
Not_you: I have tried Unreal Engine 5, and it was surprisingly bad compared to Unreal Engine 4 performance-wise. With the exact same environment (with a lot of foliage) I get 10-13 fps in UE5, while in UE4 I get 60+ fps. This was quite a disappointment, but it is early access, to be fair. I did notice, however, that they added loads of features that made the engine easier to use. They kept Linux support, so that's a plus.
WinterSnowfall: Keep in mind it's probably optimized only for latest gen GPUs - not sure what hardware you've tested it on. And in any case, it's just coming out. UE4 was a good iteration of UE to be honest, if we remember the horror shows that UE2 and UE3 turned out to be, so who knows about UE5... Only time will tell.
I might prefer to optimize my game for llvmpipe, or for the Raspberry Pi, which would be a good way to ensure that lower end modern systems would be able to handle it, instead of optimizing for the latest and greatest.

ConanTheBald: While general purpose engines can be useful for most developers, I think engines that target niche genres could serve that market better.
The problem is that often a special purpose engine, like RPG Maker, will have its own notions of how the game should play, and those might be at odds with the game the developer wants to make.

For example, Rxcovery, a SaGa-like game made with RPG Maker, had to override many of the game mechanics. Although there's no XP based leveling (you gain stats after battle SaGa-style), there's still a level (always 1) shown in the save/load screen. Furthermore, the game shows your current HP between battles, despite the fact that you always recover all HP after battle. Also, the developers weren't able to get mid-battle technique sparking to work.
Post edited June 02, 2021 by dtgreene
dtgreene: The problem is that often a special purpose engine, like RPG Maker, will have its own notions of how the game should play, and those might be at odds with the game the developer wants to make.

For example, Rxcovery, a SaGa-like game made with RPG Maker, had to override many of the game mechanics. Although there's no XP based leveling (you gain stats after battle SaGa-style), there's still a level (always 1) shown in the save/load screen. Furthermore, the game shows your current HP between battles, despite the fact that you always recover all HP after battle. Also, the developers weren't able to get mid-battle technique sparking to work.
You are right about RPG Maker, but its problem lies in being TOO niche. RPG Maker is really a JRPG Maker, if even that. The strength of a niche engine is being optimised. For example, first person shooters have been made successfully by simply modding an existing FPS game, not even an engine. Interactive novels are another genre that could be served by a dedicated engine, since there isn't much difference gameplay-wise. The biggest disadvantage of niche engines is the lack of ability to break out of the mold. Like with RPG Maker. General purpose engines let you mix different genres and even prototype concepts that haven't been tried yet. I'm not saying that niche engines are the best. I'm just saying that it all depends on the situation.
LiefLayer: In the meantime just read this (from the latest version of Godot C#)
ConanTheBald: That's not the latest version of Godot. I just downloaded Godot with Mono and there's no such warning.
C# support has been there since 2017, as I already said. Last year MS sponsored a C# overhaul, which resulted in C# being faster in some operations than the built-in GDScript. You might have gotten that warning if you downloaded a nightly build while that overhaul was in progress last year. If you just downloaded the stable version from the official site, you wouldn't have gotten that warning or any other issues.

kohlrak: That's a good sign. When using C++ do you have to use things like "extern "C"" or anything like that? That'd be another dead giveaway.
ConanTheBald: I don't use C++ with Godot so I don't know. There are extensions for C++ though.
Here's something on the topic that might help:
https://gamefromscratch.com/godot-with-c/
https://github.com/godotengine/gdnative-demos
Found what I was looking for by following your first link. I was right: Godot's going to have a problem with C++ in the future (it won't be Godot's fault). Basically, it's just using the C interface.
kohlrak: The game on my todo list is going to require I do all that myself, 'cause these engines won't have the features I actually need. Dealing with the pathfinding is one of the reasons I'm sitting here playing games and talking on the forums rather than completing the project, so I get where you're coming from. I have a few ideas that could work.
I regularly get away from the project and enjoy life, so I wouldn't stress out too much. That means playing games, hanging out in some forums, and a daily walk in nature. I also use tricks and make compromises to make my life easier. For example, I have all civilian units pass right through each other so they don't have to re-calculate every time someone crosses their path or comes into close proximity.
I don't have that luxury with my project, unfortunately: everything goes through everything. I'm still thinking up the algorithms, though.

My goal is not to force Godot on others. I have used it myself for only a couple of years now. It's by no means a perfect engine. At least not for everyone.

I'm not an expert on game engines. I have tried some but flight & space simulations aren't my cup of tea really.
Gimbal lock is where, if you look or move along a certain axis, you get locked to it, unable to escape. It happens as a result of applying sequential rotations per axis.
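The textbook way to see it (nothing engine-specific): take Z-Y-X Euler angles, pin the middle (pitch) rotation at 90 degrees, and multiply the matrices out:

R_z(\psi)\,R_y(\tfrac{\pi}{2})\,R_x(\varphi) =
\begin{pmatrix}
0 & \sin(\varphi - \psi) & \cos(\varphi - \psi) \\
0 & \cos(\varphi - \psi) & -\sin(\varphi - \psi) \\
-1 & 0 & 0
\end{pmatrix}

Only the difference \varphi - \psi survives, so yaw and roll now rotate the object around the same axis - one degree of freedom is gone until you pitch away from 90 degrees.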
While general purpose engines can be useful for most developers, I think engines that target niche genres could serve that market better. It's like a Swiss Army knife: it can do all sorts of things, but the question is - how well? Then again, in many situations you don't need super special tools for everything and just need the job done. Another approach would be to be able to turn off features that aren't being used, like Godot does, for example. Godot can handle 3D pretty well, but it depends on the scale. Godot has its limits. You could use tricks to bypass these limits, but would you want to? Another thing is that Godot isn't really a AAA engine. Then again, neither is Unity. For 3D (especially at a large scale) I'd use an engine that is well optimised for 3D, like Unreal. The only thing with Unreal is that people say v5 isn't as good and optimised yet as v4. Apparently there are huge performance differences even when loading the exact same scenes. Which means I'd stick with Unreal 4 for now. It also depends how long your current project takes, so maybe once you finish with that, Unreal 5 could be mature by then.
Wirth's Law is a huge problem these days, and all these engines are suffering from it, even the "AAA engines."
Another thing to consider is what exactly your ideas are and your level of coding skills (which seem to surpass mine by far). You could get away even with 2D by faking 3D like they did in the old days. Or, if you're comfortable, you could use some libraries, frameworks or create your own engine. There are many things to consider when choosing a game engine, so I can only make guesses. Here's a list of different game engines.
Thank you for the list. Most of my experience is outside of game development itself, so I think if you take your time to analyze some of these things and ask yourself "how the hell does this work?" you'll find some interesting answers. I used to think pathfinding was hard, but in reality it's usually just chunking out the world, checking all possible paths for collisions, then figuring out which one is shortest.

dtgreene: I might prefer to optimize my game for llvmpipe, or for the Raspberry Pi, which would be a good way to ensure that lower end modern systems would be able to handle it, instead of optimizing for the latest and greatest.
Not a bad idea, but not always true. Think of SSE on x86, for example. If you only plan on supporting x86, aiming for the highest SSE support will make the game run way faster on the hardware that supports it, because those instructions can be taken advantage of, whereas if you aim for the FPU, it'll take a performance hit because FPU code is incompatible with SSE. A simpler (but silly) example would be x86's bswap instruction: x86 has it as a single instruction, while other CPUs need manual manipulation of the data on the byte level, which will require 3 operations for 2 bytes, 5 operations for 4 bytes, 9 operations for 8 bytes, and so forth, while it's always 1 for x86 if you take advantage of the instruction when it's present.
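To illustrate, here's the manual shuffle next to the single-instruction path, in C (GCC/Clang's __builtin_bswap32 compiles to that one bswap on x86; this is just a sketch):

#include <stdint.h>

/* Portable byte swap of a 32-bit value: shifts, masks, ors. */
static uint32_t bswap32_manual(uint32_t v)
{
    return (v >> 24)
         | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u)
         | (v << 24);
}

uint32_t swap32(uint32_t v)
{
#if defined(__GNUC__) || defined(__clang__)
    return __builtin_bswap32(v); /* one bswap instruction on x86 */
#else
    return bswap32_manual(v);
#endif
}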

The problem is that often a special purpose engine, like RPG Maker, will have its own notions of how the game should play, and those might be at odds with the game the developer wants to make.

For example, Rxcovery, a SaGa-like game made with RPG Maker, had to override many of the game mechanics. Although there's no XP based leveling (you gain stats after battle SaGa-style), there's still a level (always 1) shown in the save/load screen. Furthermore, the game shows your current HP between battles, despite the fact that you always recover all HP after battle. Also, the developers weren't able to get mid-battle technique sparking to work.
As he said, RPG Maker is too niche. It's nice, though, for people who don't want to learn coding (which is probably the majority of RPG Maker users), since RPGs are pretty easy to make your own engine for (collisions, graphics, controls, maybe pathfinding if you're doing things the "Tales of" way with overworld enemies instead of random encounters).

That said, there are niche engines for those other RPG types. Like here (though this one won't meet your standards).
dtgreene: I might prefer to optimize my game for llvmpipe, or for the Raspberry Pi, which would be a good way to ensure that lower end modern systems would be able to handle it, instead of optimizing for the latest and greatest.
kohlrak: Not a bad idea, but not always true. Think of SSE on x86, for example. If you only plan on supporting x86, aiming for the highest SSE support will make the game run way faster on the hardware that supports it, because those instructions can be taken advantage of, whereas if you aim for the FPU, it'll take a performance hit because FPU code is incompatible with SSE. A simpler (but silly) example would be x86's bswap instruction: x86 has it as a single instruction, while other CPUs need manual manipulation of the data on the byte level, which will require 3 operations for 2 bytes, 5 operations for 4 bytes, 9 operations for 8 bytes, and so forth, while it's always 1 for x86 if you take advantage of the instruction when it's present.
Thing is, I would be trusting that either:
* LLVMpipe knows how to optimize on the target machine, which seems quite likely (it would emit SSE instructions on x86, but not on ARM, for example)
* There's a GPU with drivers that are not worse than LLVMpipe (a situation where this is violated would likely be quite rare, something like a Threadripper CPU with some ancient used $20 GPU that's only needed because the system won't boot or display video otherwise; such a computer is unlikely to be a gaming computer)
kohlrak: Not a bad idea, but not always true. Think of SSE on x86, for example. If you only plan on supporting x86, aiming for the highest SSE support will make the game run way faster on the hardware that supports it, because those instructions can be taken advantage of, whereas if you aim for the FPU, it'll take a performance hit because FPU code is incompatible with SSE. A simpler (but silly) example would be x86's bswap instruction: x86 has it as a single instruction, while other CPUs need manual manipulation of the data on the byte level, which will require 3 operations for 2 bytes, 5 operations for 4 bytes, 9 operations for 8 bytes, and so forth, while it's always 1 for x86 if you take advantage of the instruction when it's present.
dtgreene: Thing is, I would be trusting that either:
* LLVMpipe knows how to optimize on the target machine, which seems quite likely (it would emit SSE instructions on x86, but not on ARM, for example)
It is not, as far as I've seen, which is no surprise. If you ever build a compiler you'll see how hard it is. What you want to do is assume such technology anyway. So, for example, if you want to optimize with SSE, you build your whole program to take advantage of SSE (which means structuring your data a certain way [floats must be packed, else the FPU is faster]) and you have functions that use the SSE intrinsics or inline assembly. For ARM, you have a fallback, because 90% of it is data organization, and the algorithm for targeting SSE doesn't break FPU versions. This sounds like a lot more work than it really is in practice. However, this is only an example, since I doubt you'll end up benefiting from SSE for your RPG focus (there just aren't enough floats, unless you're going full 3D). Depending on your idea (are we looking at a lot of overworld objects?), you could take advantage of MMX, though, which is the int version of SSE.
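A rough sketch of that pattern in C - the names are made up, but the point is one packed-float layout feeding both an SSE path and a scalar fallback:

#include <stddef.h>
#ifdef __SSE__
#include <xmmintrin.h>
#endif

/* Packed floats (structure-of-arrays), so the same data feeds both paths. */
void add_packed(float *x, const float *dx, size_t n)
{
    size_t i = 0;
#ifdef __SSE__
    for (; i + 4 <= n; i += 4) {            /* 4 floats per instruction */
        __m128 a = _mm_loadu_ps(x + i);
        __m128 b = _mm_loadu_ps(dx + i);
        _mm_storeu_ps(x + i, _mm_add_ps(a, b));
    }
#endif
    for (; i < n; i++)                      /* scalar tail / ARM fallback */
        x[i] += dx[i];
}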

If you're aiming to make something like Stranger of Sword City, some of the animations use the FPU, and some of your calculations might, but it doesn't justify SSE (you're walking on mostly single objects, there doesn't appear to be much on the screen, and your calculations will be scalar anyway for battles). If you can avoid some of the weird resizing of images that they do to simulate breathing (which you might be able to throw on the GPU using OpenGL), you might be able to benefit from MMX for some of the Live2D-style stuff. Your biggest optimization, however, is going to be keeping your floors small and doing occlusion. Games of this style usually have things thrown into some kind of scripting engine that just eats up the CPU doing stuff that doesn't need to be scripted. Since you plan on going open source, the benefits of scripting drop significantly (which means use LLVM instead of llvmpipe and distribute via source instead of compiled binaries). Use SO/DLL files if you have *A LOT* of complex enemy animations (I doubt you will have 1000 or more, so this is moot). Maps can easily be structured something like this:
unsigned short xsize (you really shouldn't need more than 65535 tiles)
unsigned short ysize
object_palette (null terminated; event triggers would also be defined here, and this would be mostly pointers - the strings, like paths to textures and such, would be in miscdata)
possible_enemy_encounter_data (null terminated, and this is assuming you're doing random encounters)
map
miscdata
Where an object_palette entry would look like this (go ahead and preload all the images, instead of accessing the hard drive per battle):

unsigned char id (must be above 0)
char* path_to_texture (this ends up in miscdata to keep everything nicely aligned)
uint32 attribute_bitmask (things like whether or not it impedes movement, or any other boolean values you want to turn into a bitmask for efficiency)
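As C structs, my reading of the above would be something like this (a sketch, not a fixed format):

#include <stdint.h>

/* One object palette entry. The path string itself lives in miscdata;
   the record stores an offset into it so everything stays nicely
   aligned, and it's resolved to a pointer after load. */
struct object_palette_entry {
    uint8_t  id;                /* must be above 0; 0 terminates the list */
    uint32_t path_to_texture;   /* offset into miscdata */
    uint32_t attribute_bitmask; /* e.g. bit 0: impedes movement */
};

/* On-disk layout of a map file. */
struct map_header {
    uint16_t xsize;
    uint16_t ysize;
    /* followed by:
       - object palette entries, terminated by id == 0
       - possible enemy encounter data, null terminated
       - map: xsize * ysize tile ids
       - miscdata: strings such as texture paths */
};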
Of course, these data files can be made any number of ways, from using an assembly source (which is cleanest, if you can get it to work) to something like a C program and a script to compile the data files at compile time (not as clean or as easy, but you're less likely to fight with your toolkit). Unfortunately this is a common problem, and the best solution I've seen is using fasm, but that only works if you only compile on x86. I'm gonna see if there's a good solution for this in case you need a more practical example.
* There's a GPU with drivers that are not worse than LLVMpipe (a situation where this is violated would likely be quite rare, something like a Threadripper CPU with some ancient used $20 GPU that's only needed because the system won't boot or display video otherwise; such a computer is unlikely to be a gaming computer)
I just realized LLVMpipe is runtime compilation via LLVM (why one would want to do this is beyond me, because it's still the same LLVM). The trick to this is that, unless you're compiling shaders (which I doubt, since you're trying to target cross-platform), the drivers aren't going to matter anyway. The biggest problem you should be focused on as the coder is the CPU side anyway. Most of your game code should be there.
dtgreene: Thing is, I would be trusting that either:
* LLVMpipe knows how to optimize on the target machine, which seems quite likely (it would emit SSE instructions on x86, but not on ARM, for example)
kohlrak: It is not, as far as I've seen, which is no surprise. If you ever build a compiler you'll see how hard it is. What you want to do is assume such technology anyway. So, for example, if you want to optimize with SSE, you build your whole program to take advantage of SSE (which means structuring your data a certain way [floats must be packed, else the FPU is faster]) and you have functions that use the SSE intrinsics or inline assembly. For ARM, you have a fallback, because 90% of it is data organization, and the algorithm for targeting SSE doesn't break FPU versions. This sounds like a lot more work than it really is in practice. However, this is only an example, since I doubt you'll end up benefiting from SSE for your RPG focus (there just aren't enough floats, unless you're going full 3D). Depending on your idea (are we looking at a lot of overworld objects?), you could take advantage of MMX, though, which is the int version of SSE.
Worth noting that LLVMpipe:
* Generates code at runtime, so it can check to see if the CPU supports such features before it generates code
* Runs code meant for GPUs. This means that said code (or "shaders", as they're usually called) is written to be heavily parallelized (or rather, in the case of a fragment shader, the code is executed once per pixel in parallel).
* Since it's meant for GPUs, on a system with a GPU (excluding pathological cases like the Threadripper with a $20 GPU), the code can run there and benefit from hardware acceleration.

kohlrak: I just realized LLVMpipe is runtime compilation via LLVM (why one would want to do this is beyond me, because it's still the same LLVM). The trick to this is that, unless you're compiling shaders (which I doubt, since you're trying to target cross-platform), the drivers aren't going to matter anyway. The biggest problem you should be focused on as the coder is the CPU side anyway. Most of your game code should be there.
In this case, LLVMpipe is used to compile shaders (when there's no GPU driver present), and functions as a lowest common denominator (unlikely to find a semi-modern non-headless system that's worse at graphics), while the rest of the game is compiled for the CPU with gcc or clang (or something like rustc). (Or I could use Python as the main language, but that's equivalent to writing everything in a scripting language.)

kohlrak: Of course, these data files can be made any number of ways, from using an assembly source (which is cleanest, if you can get it to work) to something like a C program and a script to compile the data files at compile time (not as clean or as easy, but you're less likely to fight with your toolkit). Unfortunately this is a common problem, and the best solution I've seen is using fasm, but that only works if you only compile on x86. I'm gonna see if there's a good solution for this in case you need a more practical example.
To put shader source code into the executable, I was able to make a cross-platform assembly source file that simply includes the shaders into its binary; this should work for other files as well, if they're to be statically compiled into the executable. (The .S file is platform independent because it doesn't include any actual assembly language instructions; there's no guarantee that it would work in a different toolchain, though (like MSVC instead of gcc or clang).)
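For reference, such a .S file is just data directives around .incbin, with no instructions in it, which is what keeps it CPU-independent - something like this (symbol and file names are examples; GNU assembler, ELF targets):

/* shaders.S - no instructions, only data directives */
    .section .rodata
    .global tile_frag_src
tile_frag_src:
    .incbin "tile.frag"   /* paste the file's bytes in verbatim */
    .byte 0               /* NUL terminator so C can treat it as a string */

Then the C side just declares: extern const char tile_frag_src[];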

Also, Rust has a nice macro called include_bytes! that allows you to do the same thing there (and there's also include_str!, if the function to load shader source requires a rust-style string as input).
Post edited June 03, 2021 by dtgreene
kohlrak: It is not, as far as I've seen, which is no surprise. If you ever build a compiler you'll see how hard it is. What you want to do is assume such technology anyway. So, for example, if you want to optimize with SSE, you build your whole program to take advantage of SSE (which means structuring your data a certain way [floats must be packed, else the FPU is faster]) and you have functions that use the SSE intrinsics or inline assembly. For ARM, you have a fallback, because 90% of it is data organization, and the algorithm for targeting SSE doesn't break FPU versions. This sounds like a lot more work than it really is in practice. However, this is only an example, since I doubt you'll end up benefiting from SSE for your RPG focus (there just aren't enough floats, unless you're going full 3D). Depending on your idea (are we looking at a lot of overworld objects?), you could take advantage of MMX, though, which is the int version of SSE.
dtgreene: Worth noting that LLVMpipe:
* Generates code at runtime, so it can check to see if the CPU supports such features before it generates code
Unless this has some drastic changes from standard LLVM, I don't expect much enhancement.
* Runs code meant for GPUs. This means that said code (or "shaders", as they're usually called) is written to be heavily parallelized (or rather, in the case of a fragment shader, the code is executed once per pixel in parallel).
This might be useful, but I think it might be smarter to compile them once (on install) and be done with it (until the hardware changes).
* Since it's meant for GPUs, on a system with a GPU (excluding pathological cases like the Threadripper with a $20 GPU), the code can run there and benefit from hardware acceleration.
Given that, unless you're going full 3D, I doubt you'll benefit from this, unless I'm misunderstanding where you're going, of course. You seem to have a preference for 2D, which doesn't benefit much (if at all) from shaders. You can get some benefits from throwing your textures onto 3D polys for resizing if you need to do that, though, and maybe from transparency. I feel like you'll be able to benchmark the two ideas when you get the graphics up and running.

In this case, LLVMpipe is used to compile shaders (when there's no GPU driver present), and functions as a lowest common denominator (unlikely to find a semi-modern non-headless system that's worse at graphics), while the rest of the game is compiled for the CPU with gcc or clang (or something like rustc). (Or I could use Python as the main language, but that's equivalent to writing everything in a scripting language.)
My advice to you in this regard is C. You might get some convenience improvements with C++, but not much, and you could run into issues with the "binary compatibility" BS coming up (you might not, too; it just really depends on how OSes bother to handle the drama [I'm honestly hoping they straight up reject ISO's new standard whenever it comes out, 'cause it's absolutely not necessary to change this, and they're probably doing it so they can add all the internal junk that Java, C#, and such languages have that tends to be more bloat than benefit]).

To put shader source code into the executable, I was able to make a cross-platform assembly source file that simply includes the shaders into its binary; this should work for other files as well, if they're to be statically compiled into the executable. (The .S file is platform independent because it doesn't include any actual assembly language instructions; there's no guarantee that it would work in a different toolchain, though (like MSVC instead of gcc or clang).)
I'm actually trying to write an example for you in PHP right now. I thought about this before, but I had some problems getting LLVM on my tablet (Android, ARM) to take this. It could be my lack of experience with LLVM/GCC. I'm curious what you did, 'cause if it's cleaner, it'll probably be less hacky than my current stumbling block of trying to figure out how to make PHP output an int (as an int instead of a string).
Also, Rust has a nice macro called include_bytes! that allows you to do the same thing there (and there's also include_str!, if the function to load shader source requires a rust-style string as input).
Same with most assemblers, the problem being that the assemblers might not be available on the target machine. I don't suggest Rust until it stabilizes a bit more. Unless you're working for a company with a massive budget, I highly recommend people distance themselves from languages and toolchains that are subject to massive changes over the next 10 years. There's too much "we don't like this idea" going into how we're developing and changing languages (the indentation BS of Python is probably the easiest example, along with Java's verbose and annoying exception handling requirements). It's nice to have cool new features that are useful, but, at the same time, you've got to worry about the new craze of deprecating features in order to curb certain development styles that are deemed "evil" or "bad." We seem to think we can magically force people to be non-lazy, and all we're doing is annoying people to the degree that they do the bare minimum to shut the compiler/interpreter up when they're just trying to test something basic to see if it works. (Oh no! This non-production programming student isn't checking to see if the file already exists! We can't have that! I know they're just trying to learn the difference between string input and output vs binary input and output, but we've just got to make sure the file's deleted first, because they might not have wanted to overwrite the old output!)
kohlrak: Given that, unless you're going full 3D, I doubt you'll benefit from this, unless I'm misunderstanding where you're going, of course. You seem to have a preference for 2D, which doesn't benefit much (if at all) from shaders. You can get some benefits from throwing your textures onto 3D polys for resizing if you need to do that, though, and maybe from transparency. I feel like you'll be able to benchmark the two ideas when you get the graphics up and running.
The basic architecture of the graphics system looks something like this:
* The game uses tilemaps. There's a nametable (to use a term that NES homebrew developers use) that indicates which tiles go where.
* Two triangles are rendered that fill the screen, and the vertex shader is trivial.
* The fragment shader, however, is not; for each pixel, it determines which tile from the tileset should go in this spot (by looking up the nametable), and which pixel from that tile belongs on this pixel on screen. This shader does most of the work.
* Notably, there are no textures in the usual sense; it's all done in the fragment shader. (Also worth noting that this provides the option of having multiple layers.)
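In C terms, the per-pixel lookup that the fragment shader does amounts to this (a CPU-side reference sketch with made-up names, not the actual GLSL):

#include <stdint.h>

enum { TILE_W = 8, TILE_H = 8, MAP_W = 32, MAP_H = 30 };

/* nametable says which tile sits in each map cell; tileset holds the
   tiles' pixels. The shader runs this logic once per screen pixel. */
static uint32_t sample_tilemap(const uint8_t nametable[MAP_H][MAP_W],
                               const uint32_t tileset[][TILE_H][TILE_W],
                               int px, int py)
{
    int tx = px / TILE_W, ty = py / TILE_H; /* which map cell         */
    int ox = px % TILE_W, oy = py % TILE_H; /* offset inside the tile */
    uint8_t tile = nametable[ty][tx];       /* nametable lookup       */
    return tileset[tile][oy][ox];           /* pixel within that tile */
}

Layering is then just running this against a second nametable and compositing the results.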