Convert double to NSDecimalNumber while maintaining precision

I need to convert the results of calculations performed on double values to NSDecimalNumber, but I cannot use decimalNumberByMultiplyingBy: or any of the other NSDecimalNumber arithmetic methods. I tried to get an exact result in the following ways:

double calc1 = 23.5 * 45.6 * 52.7;  // <-- Correct answer is 56473.32
NSLog(@"calc1 = %.20f", calc1);

→ calc1 = 56473.32000000000698491931

NSDecimalNumber *calcDN = (NSDecimalNumber *)[NSDecimalNumber numberWithDouble:calc1];
NSLog(@"calcDN = %@", [calcDN stringValue]);

→ calcDN = 56473.32000000001024

NSDecimalNumber *testDN = [[[NSDecimalNumber decimalNumberWithString:@"23.5"]
    decimalNumberByMultiplyingBy:[NSDecimalNumber decimalNumberWithString:@"45.6"]]
    decimalNumberByMultiplyingBy:[NSDecimalNumber decimalNumberWithString:@"52.7"]];
NSLog(@"testDN = %@", [testDN stringValue]);

→ testDN = 56473.32

I understand that this difference comes down to the precision of double. But here is my question: how can I round this number as accurately as possible, no matter what the initial double value is? And if there is a more accurate method for the initial calculation, what is it?
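(A quick check, just to illustrate where the error comes from: 45.6 and 52.7 have no exact binary representation, while 23.5 does, so two of the three inputs already carry a tiny error before the multiplication happens.)

NSLog(@"%.20f", 23.5);  // exactly representable: prints 23.50000000000000000000
NSLog(@"%.20f", 45.6);  // not exactly representable: trailing error digits appear
NSLog(@"%.20f", 52.7);  // not exactly representable: trailing error digits appear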


2 answers


I would recommend rounding the number down based on the number of decimal digits in your double, so that the NSDecimalNumber keeps only the appropriate number of digits and the digits produced by the representation error are discarded, e.g.:

// Get the number of decimal digits in the double
int digits = [self countDigits:calc1];

// Round based on the number of decimal digits in the double
NSDecimalNumberHandler *behavior =
    [NSDecimalNumberHandler decimalNumberHandlerWithRoundingMode:NSRoundDown
                                                           scale:digits
                                                raiseOnExactness:NO
                                                 raiseOnOverflow:NO
                                                raiseOnUnderflow:NO
                                             raiseOnDivideByZero:NO];
NSDecimalNumber *calcDN = (NSDecimalNumber *)[NSDecimalNumber numberWithDouble:calc1];
calcDN = [calcDN decimalNumberByRoundingAccordingToBehavior:behavior];


I adapted the countDigits: method from this answer:

- (int)countDigits:(double)num {
    int rv = 0;
    const int maxDigits = 18; // <-- cap the count at 18 fractional digits
    double intpart, fracpart;
    fracpart = modf(num, &intpart); // <-- breaks num into an integral and a fractional part (math.h)

    // While the fractional part is above the 0.0000001 noise threshold,
    // multiply it by 10 and count each iteration
    while ((fabs(fracpart) > 0.0000001f) && (rv < maxDigits)) {
        num *= 10;
        fracpart = modf(num, &intpart);
        rv++;
    }
    return rv;
}
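With both snippets in place, a quick end-to-end check against the value from the question (a sketch, assuming the code above lives in the same class) gives back the expected two-decimal result:

double calc1 = 23.5 * 45.6 * 52.7;                   // 56473.32000000000698...
int digits = [self countDigits:calc1];               // -> 2: the error digits fall below the 0.0000001 cutoff

NSDecimalNumberHandler *behavior =
    [NSDecimalNumberHandler decimalNumberHandlerWithRoundingMode:NSRoundDown
                                                           scale:digits
                                                raiseOnExactness:NO
                                                 raiseOnOverflow:NO
                                                raiseOnUnderflow:NO
                                             raiseOnDivideByZero:NO];
NSDecimalNumber *calcDN = (NSDecimalNumber *)[NSDecimalNumber numberWithDouble:calc1];
calcDN = [calcDN decimalNumberByRoundingAccordingToBehavior:behavior];
NSLog(@"calcDN = %@", [calcDN stringValue]);         // -> 56473.32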

      


Well, you can either use double to represent the numbers and accept the inaccuracy, or use some other number representation, for example NSDecimalNumber. It all depends on what the expected values are and what accuracy the business requires.

If it is really important not to use the arithmetic methods provided by NSDecimalNumber, you can still control the rounding behavior with an NSDecimalNumberHandler, which is the concrete implementation of the NSDecimalNumberBehaviors protocol. The actual rounding is then done with the decimalNumberByRoundingAccordingToBehavior: method.

Here's a snippet - it's in Swift, but it should be readable:



let behavior = NSDecimalNumberHandler(roundingMode: NSRoundingMode.RoundPlain,
                                             scale: 2,
                                  raiseOnExactness: false,
                                   raiseOnOverflow: false,
                                  raiseOnUnderflow: false,
                               raiseOnDivideByZero: false)

let calcDN : NSDecimalNumber = NSDecimalNumber(double: calc1)
                               .decimalNumberByRoundingAccordingToBehavior(behavior)
calcDN.stringValue // "56473.32"
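For completeness, the same rounding in Objective-C (an untested sketch that simply mirrors the Swift code above, since the question itself is in Objective-C):

NSDecimalNumberHandler *behavior =
    [NSDecimalNumberHandler decimalNumberHandlerWithRoundingMode:NSRoundPlain
                                                           scale:2
                                                raiseOnExactness:NO
                                                 raiseOnOverflow:NO
                                                raiseOnUnderflow:NO
                                             raiseOnDivideByZero:NO];

NSDecimalNumber *calcDN = [[[NSDecimalNumber alloc] initWithDouble:calc1]
                              decimalNumberByRoundingAccordingToBehavior:behavior];
NSLog(@"calcDN = %@", [calcDN stringValue]);  // -> 56473.32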

      

I don't know of any way to improve the accuracy of the actual calculation itself while using the double representation.
