
It was really convenient for my virtual machine. In theory, the huge bitmask of "flags" could be optimized later once WebAssembly becomes mainstream. Sure, switches are prettier, but a nice binary tree will work faster, which is also what my VM was doing in addition to emulating bitwise operations: opcodes were stored in a tree that was walked like a binary search.
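(A minimal C++ sketch of the flag-bitmask idea being described; the flag names and values are hypothetical, not taken from the poster's VM.)

#include <cstdint>
#include <cstdio>

// Hypothetical VM status flags packed into a single bitmask.
enum Flag : uint32_t {
    FLAG_ZERO  = 1u << 0,
    FLAG_CARRY = 1u << 1,
    FLAG_SIGN  = 1u << 2,
};

int main() {
    uint32_t flags = 0;
    flags |= FLAG_ZERO | FLAG_CARRY; // set two flags at once
    flags &= ~FLAG_CARRY;            // clear one flag
    if (flags & FLAG_ZERO)           // test one flag
        std::puts("zero flag is set");
    return 0;
}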
kohlrak: It was really convenient for my virtual machine. In theory, the huge bitmask of "flags" could be optimized later once WebAssembly becomes mainstream. Sure, switches are prettier, but a nice binary tree will work faster, which is also what my VM was doing in addition to emulating bitwise operations: opcodes were stored in a tree that was walked like a binary search.
You are certainly not wrong. It's just that in the vast majority of JS applications, the problem is not in the slowness of logical operations. =)
Not yet, but, presumably, it will be in the future (when JavaScript is "compilable"). I like to stay ahead of the game, even if it is JavaScript that we're talking about, and I hate JavaScript. Not that it matters much, as I don't code professionally. Eventually, JavaScript will be optimized the same way you optimize (or, rather, don't, even though you should) other applications.
kohlrak: Not yet, but, presumably, it will be in the future (when JavaScript is "compilable"). I like to stay ahead of the game, even if it is JavaScript that we're talking about, and I hate JavaScript. Not that it matters much, as I don't code professionally. Eventually, JavaScript will be optimized the same way you optimize (or, rather, don't, even though you should) other applications.
JS has made major strides in the past several years. And the environments (browsers and Node) have also done a tremendous job when it comes to optimization. I came from a different stack, but now I absolutely love JS and don't want to code in anything else. =)
I remember that argument being made about C++ compilers before. Meanwhile, the "optimized assembly" looked something like this:

mov rax, [rsp+8]
push rdi
mov rax, [rsp+16]

when

push rdi

would've done the exact same thing, but taken less space and fewer cycles. This wasn't a real problem until you realized we're only looking at one function (the start of the function, so it executed on every call), that this function would be called in a loop, and that it didn't do much; it was only split out for "readability." So a 20-cycle function became a 30-cycle function, and it ran on every iteration of the loop.

On the flip side, today, unlike 10 years ago when I saw these kinds of optimization problems despite the "large strides compilers have made," compilers don't make those kinds of mistakes anymore. The big mistake compilers make now is hinted at by another thread where people are complaining about the RAM requirements of games: unused code is getting included in binaries, because your average dev can't "optimize compile" these days. Remembering that JS doesn't compile yet, well... take some time to think about that.
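(To make the cost concrete, here is a hypothetical C++ sketch of the situation described above: a trivial helper split out for readability and called on every iteration of a hot loop, so any per-call overhead the compiler fails to remove is multiplied by the trip count. The function names are made up for illustration.)

// A tiny helper; its body is only a few cycles, so ~10 cycles
// of redundant prologue work is a large relative cost.
static int scale(int x) {
    return x * 3 + 1;
}

// Calls the helper n times; wasted cycles in scale()
// are paid on every single iteration.
int sum_scaled(const int *data, int n) {
    int total = 0;
    for (int i = 0; i < n; ++i)
        total += scale(data[i]);
    return total;
}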
kohlrak: I remember that argument being made about C++ compilers before. Meanwhile, the "optimized assembly" looked something like this:

mov rax, [rsp+8]
push rdi
mov rax, [rsp+16]

when

push rdi

would've done the exact same thing, but taken less space and fewer cycles. This wasn't a real problem until you realized we're only looking at one function (the start of the function, so it executed on every call), that this function would be called in a loop, and that it didn't do much; it was only split out for "readability." So a 20-cycle function became a 30-cycle function, and it ran on every iteration of the loop.

On the flip side, today, unlike 10 years ago when I saw these kinds of optimization problems despite the "large strides compilers have made," compilers don't make those kinds of mistakes anymore. The big mistake compilers make now is hinted at by another thread where people are complaining about the RAM requirements of games: unused code is getting included in binaries, because your average dev can't "optimize compile" these days. Remembering that JS doesn't compile yet, well... take some time to think about that.
I am not ready to solve problems that will appear in 10 years. My job is to write modern web applications. =) I like to think I'm not terribly bad at it. As the language and the related technologies evolve, I have to, of course, keep current. You may very well be right, and in another 10 years everything will be different and the tasks I face will require entirely different approaches and solutions. Be that as it may, we will arrive there one step at a time. In the meantime there is plenty to know and do and learn already.

BTW, I think it's awesome that people such as yourself are playing with all sorts of different technologies. Assembly language is really cool and awesome at what it does. I admire you for figuring it out and doing projects in it. My own hobbies and work lie in a different area, but I'm still perfectly capable of appreciating the neat stuff that others do.
That's the kicker: the technology coming out to fix the optimization problems you're seeing now is half-out. It's not 10 years away before you start seeing the shifts. Thus, the problems I'm talking about aren't 10 years away. Google already compiles JavaScript, which is what led to its JavaScript compatibility issues. The time is now. The solutions to these problems are the ones that are 10 years away. So, effectively, you're coding for 10 years in the future instead of right now.
BTW, I think it's awesome that people such as yourself are playing with all sorts of different technologies. Assembly language is really cool and awesome at what it does. I admire you for figuring it out and doing projects in it. My own hobbies and work lie in a different area, but I'm still perfectly capable of appreciating the neat stuff that others do.
I would not argue that one should code in assembly all the time like I do, but the problem I outlined above is easily solved by someone who knows assembly but is coding in the language that had the problem (C++). Take the following example:

int add(int x, int y){ return x+y; }

It would've been compiled as:

_Z3addii:
sub rsp, 16
mov [rsp], rdi
mov [rsp+8], rsi
mov rax, rdi
add rax, rsi
add rsp, 16
ret

Which looks nuts, right? The reason it's hard to read is that you don't know why it's doing what it's doing, which is not what it was supposed to be doing (I'd honestly call the old behavior a bug in the optimization algorithms). Now, if you look at what it's doing, half the code of this function is preserving the two parameters on the stack (rsp is the stack pointer), because you're passing "by value," which is what you're taught to do (and which is unnecessary, since we're not changing either parameter). If you instead pass "by reference" any variable that's not getting changed, it won't preserve them, which isn't a problem since you're not changing them.

int add(int &x, int &y){ return x+y; }

This becomes:

_Z3addRiS_:
mov rax, rdi
add rax, rsi
ret

Which would be further optimized to be "inline" instead of a function call, which would further remove the "mov rax, rdi" and "ret" in favor of simply doing the math and not storing it in rax (the return register). Fortunately, thanks to the actual strides in compiler technology (instead of the ones believed to exist but that actually don't), compilers now do this optimization when you set them to optimize, instead of being really derpy, so the workaround isn't even necessary anymore; but it was 10 years ago. That said, no one but a handful of people used the workaround 10 years ago. If we were to look at some games with something like IDA Pro (which is illegal if you release anything you see or any changes you make), we'd see these kinds of errors in the code, where the compiler wrote code that took up time but did nothing.
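(If you want to check this yourself, one way, assuming g++ is available, is to dump the generated assembly with "g++ -O2 -S add.cpp" and read add.s; the file and function names here are hypothetical.)

// add.cpp: at -O0 you get stack spills like the listing above;
// at -O2 the call to add() is inlined into caller() and the
// spills disappear entirely.
int add(int x, int y) { return x + y; }

int caller(int a, int b) {
    return add(a, b); // no call instruction remains at -O2
}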
Why?

Flags? I'm using this all the time...
I'm looking into creating one, but no promises as to when. Since I'm running out of original ideas for continuing my esoteric ones, my next one will probably include a mix of esoteric and non-esoteric languages.
Might still take some time to do.
kohlrak: I was looking for "&=1;"
But your solution assumes a particular representation of integers in binary. If, for example, numbers are represented using ones' complement, then "&=1" will no longer work for negative numbers. Alaric's solution is universal as long as the mod operator is implemented correctly.
kohlrak: I was looking for "&=1;"
ZFR: But your solution assumes a particular representation of integers in binary. If, for example, numbers are represented using ones' complement, then "&=1" will no longer work for negative numbers. Alaric's solution is universal as long as the mod operator is implemented correctly.
That's why I said positive numbers. To be fair, for the purpose I described, his solution was more universal. However, for the intended goal, the one I was looking for was optimal, since & is faster than %. This was more a flaw of my challenge than of my answer, which I acknowledged.
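(A minimal C++ sketch of the two answers being compared; it assumes a non-negative input, which is exactly the caveat discussed above. With negative inputs, n % 2 in C++ can return -1, and n & 1 assumes two's complement.)

#include <cstdio>

int odd_mod(int n) { return n % 2; } // the "universal" answer
int odd_and(int n) { return n & 1; } // the bitwise answer the puzzle wanted

int main() {
    for (int n = 0; n < 4; ++n)
        std::printf("%d -> %d %d\n", n, odd_mod(n), odd_and(n));
    return 0;
}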
kohlrak: That's why I said positive numbers.
Ah, I see. Missed that part, sorry. Though to be fair, it's still implementation-dependent. Technically, one could invent a weird binary representation for which this wouldn't work.
kohlrak: That's why I said positive numbers.
ZFR: Ah, I see. Missed that part, sorry. Though to be fair, it's still implementation-dependent. Technically, one could invent a weird binary representation for which this wouldn't work.
Then again, it still wouldn't be standard binary: binary representation is standardized. You could argue that I could come up with a system where we start from 0, then go 2, 3, 4, 1, 5, 6, 7, 9, and finally 8 before reaching 10, but I can't exactly call that decimal, either. Remember, too, that binary represents the underlying hardware rather than actually being the underlying hardware. "and" and "or" and such are expected to work the same regardless of whether 1 is represented by negative voltage or positive voltage (which is something you deal with when you play with the things that I play with). So if you have reversed polarity or something like that, your "and" and "or" operations, as well as "add" and "subtract," are still supposed to follow the standard representation; otherwise, people don't say it's a quirk of your processor, they say it's "a misprint in the manual/specification" or "a bug."

Now let us suppose that representation actually affects the hardware and that on some machines a byte is 7 bits (which is the case, believe it or not), meaning that 1 MB on such a machine is 0.875 of a megabyte on a normal machine. That is why we don't let such machines communicate with each other unless we have a "compatibility layer" to keep standards in place. Such a compatibility layer would also apply to the programming language. Sure, the C++ standard, for example, doesn't presuppose that, but anyone writing a C/C++ compiler will surely understand that any hope of compatibility with existing code requires it, since people regularly use bitwise operations for flags; that would be even more common on such a system, since, odds are, we'd be looking at some sort of microcontroller.

The kind of thing you can't rely on is the stuff that's not standardized; for example, an "int" isn't actually universally a 32-bit number. It used to be 16 bits (now called "short (int)"), so the old test for negative used to be >>15, which worked. >>31 would still work on most systems, since this standardizes to "signed arithmetic right shift," or "sar" (not all archs actually have a signed right shift). However, it would break if "long long int" became the standard "int" (which is unlikely, thanks to x86's "64-bit" instruction encodings actually being limited to 40 bits).
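(A minimal C++ sketch of the shift-based sign test described above. It assumes a 32-bit two's-complement int, which is exactly the kind of assumption being warned about, so the portable comparison is shown alongside it. Note also that right-shifting a negative signed int was only implementation-defined before C++20.)

#include <cstdio>

int main() {
    int x = -5;
    int neg_shift    = (x >> 31) & 1;   // relies on int being 32 bits
    int neg_portable = (x < 0) ? 1 : 0; // works at any width
    std::printf("%d %d\n", neg_shift, neg_portable);
    return 0;
}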

EDIT: So, if you want to make RAX full of 1s, there are many methods, but I would argue that the following is the smallest:

xor eax, eax ; writing eax zero-extends into rax
not rax

and the fastest method is:
and eax, 0
not rax

EDIT again: There might be a smaller way by mangling rflags, but, odds are, you don't want to do that, since you'll likely be using rflags in your calculation.

But there's a big picture to this, too. It's not that you need to use it regularly, so much as that awareness is a big deal. When I was in high school, I took a C++ class for easy grades and in the hope that I could somehow get some sort of certification out of it (I was a dumb kid, OK?). Since I was already good at it, I often helped the teacher manage the half a room full of questions. I noticed that students kept typing & instead of && and | instead of ||, and it was causing a lot of trouble. They didn't see the error: there were no syntax errors, so compilation passed and they were none the wiser to their mistakes. I was able to explain that these are operators the teacher hadn't taught, which is why they behave strangely. I made the argument to the teacher that if he taught these operators, this situation could be avoided in the future. Unfortunately, he was a bonehead.
Post edited June 07, 2018 by kohlrak
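(A minimal C++ sketch of the & vs. && mix-up described above: the bitwise form compiles without complaint inside a condition, so the typo produces wrong behavior instead of an error.)

#include <cstdio>

int main() {
    int a = 1, b = 2;
    if (a && b) std::puts("logical &&: taken"); // both nonzero, so true
    if (a & b)  std::puts("bitwise &: taken");  // 1 & 2 == 0: silently skipped
    return 0;
}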
kohlrak: But, if you must: a simple puzzle that most coders can't pull off is writing a single-line function that returns 1 if the number is odd, 0 if the number is even (assuming all positive numbers). Strange puzzle, I know, but there's a lesson in it.
What should the function do if the number is a float and is not an integer? What should it do if passed Infinity or NaN?

(By the way, it seems my second post, which has some floating-point related problems, has been ignored.)
kohlrak: But, if you must: a simple puzzle that most coders can't pull off is writing a single-line function that returns 1 if the number is odd, 0 if the number is even (assuming all positive numbers). Strange puzzle, I know, but there's a lesson in it.
dtgreene: What should the function do if the number is a float and is not an integer? What should it do if passed Infinity or NaN?
Infinity and NaN are float concepts. Now, if it's a float but the function was made for integers, most languages would catch that and either warn or give an error. If someone is using a language that doesn't, it's their fault for using such a language and not following the documentation.

But, like I said, I knew the answer I wanted before I gave the puzzle. I wrote the puzzle wrong. The idea was to challenge based on information that's important, yet usually missing from classrooms on the topic.
dtgreene: (By the way, it seems my second post, which has some floating-point related problems, has been ignored.)
It's the real challenge that was requested, but I wasn't here to look for challenges; I was here to make them. Then a conversation broke out. Kind of like how, when you go to a boxing match, a hockey game breaks out? XD
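(A hypothetical C++ sketch of how the float cases raised above could be handled: reject non-finite and non-integral inputs before testing parity. Returning -1 for invalid input is just one choice; neither poster specified this behavior.)

#include <cmath>
#include <cstdio>

// 1 for odd, 0 for even, -1 for NaN, Infinity, or non-integral input.
int parity(double x) {
    if (!std::isfinite(x)) return -1;             // NaN and +/-Inf
    double intpart;
    if (std::modf(x, &intpart) != 0.0) return -1; // e.g. 2.5
    return std::fmod(std::fabs(intpart), 2.0) == 1.0 ? 1 : 0;
}

int main() {
    std::printf("%d %d %d\n", parity(4.0), parity(7.0), parity(2.5));
    return 0;
}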