Is sign-magnitude still used to represent negative numbers?

I understand that two's complement is used to represent negative numbers, but there is also the sign-magnitude method. Is sign-magnitude still used to represent negative numbers? If not, where was it used before? And how can a machine that interprets negative numbers using two's complement communicate with another machine that uses sign-magnitude instead?

1 answer


Yes, sign-magnitude is still used today, although not where you might expect. For example, IEEE 754 floating point uses a single sign bit to indicate whether a number is positive or negative. (As a result, IEEE floating-point numbers can represent -0.0.) Sign-magnitude is not commonly used today for integers.
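
To make the sign bit concrete, here is a minimal C sketch (my own illustration, assuming the platform's double is a 64-bit IEEE 754 value) showing that +0.0 and -0.0 differ only in their top bit yet compare equal:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        double pos = 0.0, neg = -0.0;
        uint64_t pos_bits, neg_bits;

        /* Copy out the raw bit patterns (assumes 64-bit IEEE 754 doubles). */
        memcpy(&pos_bits, &pos, sizeof pos_bits);
        memcpy(&neg_bits, &neg, sizeof neg_bits);

        /* Only bit 63, the sign bit, differs between the two. */
        printf("+0.0 bits: %016llx\n", (unsigned long long)pos_bits);
        printf("-0.0 bits: %016llx\n", (unsigned long long)neg_bits);

        /* Yet IEEE 754 requires them to compare equal. */
        printf("0.0 == -0.0: %d\n", pos == neg);
        return 0;
    }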



Communication between two machines that use different number representations is a problem only if each tries to impose its own encoding on the other. If a common interchange format is used, there is no problem: a machine that uses two's complement can easily construct a sign-magnitude encoding of a number (and vice versa). These days, different machines are more likely to communicate using a textual representation of a number (as in JSON or XML) or using a separate binary encoding (such as ASN.1, zigzag encoding, etc.).
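
As a rough illustration of that conversion, here is a C sketch; the function names and the 32-bit layout (sign in bit 31, magnitude in the bits below it) are my own choices for the example:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    /* Pack a two's-complement int32_t into a 32-bit sign-magnitude word:
     * bit 31 holds the sign, bits 30..0 hold the magnitude.  (INT32_MIN,
     * whose magnitude needs 32 bits, has no 32-bit sign-magnitude form.) */
    uint32_t to_sign_magnitude(int32_t v) {
        if (v >= 0)
            return (uint32_t)v;
        return UINT32_C(0x80000000) | (uint32_t)-(int64_t)v;
    }

    /* Decode a 32-bit sign-magnitude word back into two's complement. */
    int32_t from_sign_magnitude(uint32_t w) {
        int32_t magnitude = (int32_t)(w & UINT32_C(0x7FFFFFFF));
        return (w & UINT32_C(0x80000000)) ? -magnitude : magnitude;
    }

    int main(void) {
        int32_t samples[] = { 5, -5, 0, -1 };
        for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
            uint32_t sm = to_sign_magnitude(samples[i]);
            printf("%4" PRId32 " -> sign-magnitude 0x%08" PRIX32 " -> %4" PRId32 "\n",
                   samples[i], sm, from_sign_magnitude(sm));
        }
        return 0;
    }

Note that the two encodings cover slightly different ranges: 32-bit two's complement can represent -2^31 but has no negative zero, while 32-bit sign-magnitude has a negative zero but cannot represent -2^31.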

