kohlrak: That's why i said positive numbers.
ZFR: Ah, I see. Missed that part, sorry. Though to be fair, it's still implementation-dependent: technically, one could invent a weird binary representation for which this wouldn't work.
Then again, it still wouldn't be standard binary: binary representation is standardized. You could argue that I could come up with a system where we start from 0, then go 2, 3, 4, 1, 5, 6, 7, 9, and finally 8 before reaching 10, but I can't exactly call that decimal either. Remember, too, that binary represents the underlying hardware; it isn't the hardware itself. "and" and "or" and such are expected to work the same regardless of whether 1 represents negative voltage or positive voltage (which is something you deal with when you play with the things that I play with). So if you have reversed polarity or something like that, your "and" and "or" operations, as well as "add" and "subtract", are still supposed to follow the standard representation; otherwise people don't call it a quirk of your processor, they call it "a misprint in the manual/specification" or "a bug."
Now let us suppose that representation actually did affect the hardware, and that on some machines a byte were 7 bits (which has been the case, believe it or not), meaning that 1MB on such a machine is 0.875 of a megabyte on a normal machine. That is why we don't allow such machines to communicate with each other unless we have a "compatibility layer" to keep standards in place, and such a compatibility layer would also apply to the programming language. Sure, the C++ standard, for example, doesn't presuppose that, but anyone writing a C/C++ compiler will surely understand that any hope of compatibility with existing code requires it, since people regularly use bitwise operations for flags; that would be even more common on such a system since, odds are, we'd be looking at some sort of microcontroller.
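To make the flags point concrete, here's a minimal C sketch of the pattern any compatible compiler would have to keep working; the flag names and helper functions are made up for illustration:

```c
#include <stdint.h>

/* Hypothetical status flags packed into one byte, the way a
   microcontroller register is typically modeled. The names are
   invented for this example. */
#define FLAG_READY (1u << 0)
#define FLAG_ERROR (1u << 1)
#define FLAG_BUSY  (1u << 2)

uint8_t set_flag(uint8_t flags, uint8_t f)   { return flags | f; }           /* turn a bit on */
uint8_t clear_flag(uint8_t flags, uint8_t f) { return flags & (uint8_t)~f; } /* turn a bit off */
int     has_flag(uint8_t flags, uint8_t f)   { return (flags & f) != 0; }    /* test a bit */
```

If `|` or `&` behaved differently from the standard representation, every one of these idioms would silently break, which is why compiler writers treat the behavior as non-negotiable.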
The kind of thing you can't rely on is the stuff that's not standardized: an "int", for example, isn't universally a 32-bit number. It used to be 16 bits (that size is now "short (int)"), so the old test for negatives used to be >>15, which worked at the time. >>31 still works on most systems, but only because most compilers implement right shift of a signed value as an arithmetic shift ("sar"); the C standard actually leaves that implementation-defined, since not all archs have a signed right shift. And it would break again if "long long int" ever became the standard "int" (which is unlikely, not least because x86-64 doesn't actually use all 64 bits: early implementations capped physical addresses at 40 bits and virtual addresses at 48).
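A quick sketch of why the shift trick is width-dependent while a plain comparison isn't, assuming a fixed 32-bit type; the cast to unsigned keeps the shift well-defined, since the standard leaves right-shifting a negative signed value up to the implementation:

```c
#include <stdint.h>

/* Width-dependent: pulls the sign bit of a 32-bit value down to bit 0.
   The uint32_t cast makes the shift well-defined; a raw signed >> 31
   is implementation-defined for negative inputs. */
int is_negative_shift(int32_t x) { return (int)((uint32_t)x >> 31); }

/* Portable: an ordinary comparison works at any integer width, and a
   decent compiler emits the same kind of code anyway. */
int is_negative(int x) { return x < 0; }
```

The second version is what you actually want in portable code; the first only illustrates what the old >>15 / >>31 tests were doing.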
EDIT: So, if you want to make RAX all 1s there are many methods. I would argue the following is about the smallest, at 5 bytes (a single or rax, -1 is one byte smaller still, but less obvious):
xor eax, eax ;writing eax zero-extends into rax, so rax = 0
not rax
and the same pair is also a good bet for the fastest: modern x86 cores recognize xor eax, eax as a zeroing idiom and break the dependency on the register's old value. A variant like
and eax, 0
not rax
also produces all 1s, but the and isn't special-cased, so it keeps a false dependency on the previous contents of eax.
EDIT again: There might be a smaller way by mangling rflags, but, odds are, you don't want to do that, since you'll likely be using rflags in your calculation.
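For comparison, the portable C-level equivalent of the register trick above: complementing zero sets every bit, whatever the operand width (types from stdint.h):

```c
#include <stdint.h>

/* ~0 of an unsigned type is all 1s at that width, which is exactly
   what the xor/not pair achieves for RAX at the machine level. */
uint64_t all_ones64(void) { return ~(uint64_t)0; } /* 0xFFFFFFFFFFFFFFFF */
uint32_t all_ones32(void) { return ~(uint32_t)0; } /* 0xFFFFFFFF */
```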
But there's a big picture to this, too. It's not that you need to use it regularly so much as that awareness is a big deal. When I was in high school, I took a C++ class for easy grades and in hopes that I could somehow get some sort of certification out of it (I was a dumb kid, OK?). Since I was already good at it, I often helped the teacher manage the half a room full of questions. I noticed that students kept writing & instead of && and | instead of ||, and it caused a lot of trouble. They didn't see the error: there were no syntax errors, so compilation passed and they were none the wiser to their mistakes. I was able to explain that these are operators the teacher hadn't taught, which is why their programs behaved strangely. I made the argument to the teacher that if he taught these operators, this situation could be avoided in the future. Unfortunately, he was a bonehead.
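The pitfall those students hit fits in a couple of lines of C: both operands below are "true" in the logical sense, but their set bits don't overlap, so the bitwise version quietly yields false:

```c
/* && treats any nonzero operand as true; & compares bit patterns. */
int logical_and(int a, int b) { return a && b; }
int bitwise_and(int a, int b) { return a & b; }

/* logical_and(2, 4) is 1, but bitwise_and(2, 4) is 0,
   because 2 is 010 and 4 is 100: no bits in common. */
```

Since both forms compile cleanly, the mistake only ever shows up as wrong behavior at runtime, which is exactly why it confused a classroom that had never been shown the bitwise operators.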