Is there a type with more capacity than u_long / UInt64 in Swift?
A credit card number is not a number in any meaningful mathematical sense. It is a sequence of digits, and a CC number should be treated like text, just like a telephone number. One pressing problem with a fixed-length integer representation is that the code cannot distinguish leading (or trailing) zeros from "no more digits present".
Use a string, or a specific (custom) type representing the CC number, perhaps backed by an internal string. The length of the number (in base 10) is then trivially the number of digits: the length of the underlying string.
The CC number (or rather, the string representing it) can then be encoded into an appropriate binary representation if and when required.
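A minimal sketch of such a wrapper type, assuming simple digit-only validation is enough (the name CreditCardNumber and its API are illustrative, not an existing type):

// A card "number" stored as text, not as an integer.
struct CreditCardNumber {
    let digits: String

    // Fails unless the input is a non-empty run of decimal digits;
    // leading zeros are preserved instead of being silently dropped.
    init?(_ text: String) {
        guard !text.isEmpty, text.allSatisfy({ ("0"..."9").contains($0) }) else { return nil }
        self.digits = text
    }

    // The base-10 length is just the length of the underlying string.
    var digitCount: Int { digits.count }
}

let cc = CreditCardNumber("0012345678901234")
print(cc?.digitCount ?? 0) // 16: the leading zeros are not lost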
I am working on a BigNumber library with which you can do large-number calculations. The library is based on the GNU Multiple Precision (GMP) library (see https://gmplib.org) and I wrote an Objective-C / Swift wrapper for it. Much of the usual integer math is already possible, including operator overloading. Sample code looks like this:
var err: NSError?
var bi1 = BigInt(nr: 12468642135797531)                                // from an integer literal
var bi2 = BigInt(nr: "12345678901011121314151617181920", error: &err)  // from a decimal string
var res = bi1 * bi2                                                    // overloaded * on BigInt
println("Multiply 2 BigInts: bi1 * bi2 = \(res.toString())")
which prints:
Multiply 2 BigInts: bi1 * bi2 = 153933852140173822960829726365674325601913839520
You can find the library at: https://github.com/githotto/osxgmp
I think it's pretty easy to do "credit card" math this way, even with numbers of far more than 28 digits.
Another approach would be to work with strings and define the arithmetic operators on them:
func +(lhs: String, rhs: Int8) -> String
func +(lhs: String, rhs: Int16) -> String
func +(lhs: String, rhs: Int32) -> String
func +(lhs: String, rhs: Int64) -> String
func +(lhs: String, rhs: String) -> String
// ... other operators
This has the advantage of theoretically allowing an unlimited number of digits, but has the disadvantage that strings may not always represent numbers.
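As a sketch of what the String-plus-String case might look like, here is a schoolbook digit-by-digit addition; it assumes both operands are plain non-negative decimal strings, and note that within the defining module this overload shadows the standard library's string concatenation:

// Schoolbook addition of two non-negative decimal strings.
// Assumes both inputs contain only the digits 0-9.
func +(lhs: String, rhs: String) -> String {
    let a = Array(lhs.reversed()), b = Array(rhs.reversed())
    var result: [Character] = []
    var carry = 0
    for i in 0..<max(a.count, b.count) {
        let x = i < a.count ? a[i].wholeNumberValue! : 0
        let y = i < b.count ? b[i].wholeNumberValue! : 0
        let sum = x + y + carry
        result.append(Character("\(sum % 10)"))   // current digit
        carry = sum / 10                          // carry to next column
    }
    if carry > 0 { result.append(Character("\(carry)")) }
    return String(result.reversed())
}

print("18446744073709551615" + "1") // "18446744073709551616" (one past UInt64.max)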
As of 2019, the built-in limits are already very large.
import Foundation

// Largest finite values of the common numeric types.
print("Double \(Double.greatestFiniteMagnitude)")
print("NSDecimalNumber \(NSDecimalNumber.maximum)")
// Digit count minus one, i.e. the decimal exponent of the maximum.
let countByHand = "\(NSDecimalNumber.maximum)".count - 1
print("NSDecimalNumber zeroes \(countByHand)")
print("Float \(Float.greatestFiniteMagnitude)")
print("UInt \(UInt.max)")
produces
Double 1.7976931348623157e+308
NSDecimalNumber 3402823669209384634633746074317682114550000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
NSDecimalNumber zeroes 165
Float 3.4028235e+38
UInt 18446744073709551615
I'm not sure whether any of this has changed since 2014.
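As a quick sanity check against the original question (the 28-digit value below is just an arbitrary example): UInt64 tops out at 20 digits, while NSDecimalNumber carries up to 38 significant decimal digits, so a card-length value still fits exactly.

import Foundation

// UInt64.max is 18446744073709551615 (20 digits); one more overflows.
print(UInt64("18446744073709551615") as Any) // Optional(18446744073709551615)
print(UInt64("18446744073709551616") as Any) // nil: exceeds UInt64.max

// NSDecimalNumber holds up to 38 significant decimal digits, so a
// 28-digit card-length value is still represented exactly.
let big = NSDecimalNumber(string: "1234567890123456789012345678")
print(big) // 1234567890123456789012345678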