How to compute a logarithm with the preprocessor
Okay, now for the dirty brute-force preprocessor trick.
From your question, I assume that what you actually want is not the logarithm itself (which is not even possible in integer arithmetic), but the number of bits required to represent a given number. If we limit ourselves to 32-bit integers, there is a solution for this, although it is not very pretty.
/* Note: this assumes unsigned long is wider than 32 bits, so that
   1UL << 32 in the D == 32 term below is well defined. */
#define IS_REPRESENTIBLE_IN_D_BITS(D, N) \
(((unsigned long) (N) >= (1UL << ((D) - 1)) && (unsigned long) (N) < (1UL << (D))) ? (D) : -1)
#define BITS_TO_REPRESENT(N) \
((N) == 0 ? 1 : (31 \
+ IS_REPRESENTIBLE_IN_D_BITS( 1, N) \
+ IS_REPRESENTIBLE_IN_D_BITS( 2, N) \
+ IS_REPRESENTIBLE_IN_D_BITS( 3, N) \
+ IS_REPRESENTIBLE_IN_D_BITS( 4, N) \
+ IS_REPRESENTIBLE_IN_D_BITS( 5, N) \
+ IS_REPRESENTIBLE_IN_D_BITS( 6, N) \
+ IS_REPRESENTIBLE_IN_D_BITS( 7, N) \
+ IS_REPRESENTIBLE_IN_D_BITS( 8, N) \
+ IS_REPRESENTIBLE_IN_D_BITS( 9, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(10, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(11, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(12, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(13, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(14, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(15, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(16, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(17, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(18, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(19, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(20, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(21, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(22, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(23, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(24, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(25, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(26, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(27, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(28, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(29, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(30, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(31, N) \
+ IS_REPRESENTIBLE_IN_D_BITS(32, N) \
) \
)
The idea is that a number n > 0 has a representation using exactly d bits if and only if n >= 2^(d-1) and n < 2^d. For example, 42 needs exactly 6 bits because 32 <= 42 < 64. After treating the n = 0 case specially, we simply brute-force this test over all 32 possible answers.
The helper macro IS_REPRESENTIBLE_IN_D_BITS(D, N) expands to an expression that evaluates to D if N can be represented using exactly D bits and to -1 otherwise. I have defined the macros such that each term contributes -1 when its answer is "no". To compensate for those 31 negative terms, I add 31 at the end. If the number cannot be represented in any of 1, ..., 32 bits, the overall result is -1, which should help us catch some bugs.
The expression BITS_TO_REPRESENT(42) is a valid compile-time constant that can be used in an array length declaration.
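For instance, here is a minimal usage sketch (the buffer and typedef names are made up; the negative-array-size typedef is a common pre-C11 substitute for a static assertion):
char buffer[BITS_TO_REPRESENT(1000)];  /* 1000 needs 10 bits, so this is char[10] */
typedef char check_42[(BITS_TO_REPRESENT(42) == 6) ? 1 : -1];  /* fails to compile if 42 != 6 bits */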
All that said, the extra cost of simply making your array 32 elements long is probably acceptable for many applications, and it saves you some hassle. So I would only use trickery like this if I really had to.
Update: Just to avoid confusion: this solution does not use the preprocessor to evaluate the "logarithm". All the preprocessor does is text substitution, which you can see if you compile with the -E switch (at least for GCC). Let's take a look at this code:
int
main()
{
int digits[BITS_TO_REPRESENT(42)];
return 0;
}
It will be preprocessed into this (be warned):
int
main()
{
int digits[(42 == 0 ? 1 : (31 + (((unsigned long) 42 >= (1UL << (1 - 1)) && (unsigned long) 42 < (1UL << 1)) ? 1 : -1) + (((unsigned long) 42 >= (1UL << (2 - 1)) && (unsigned long) 42 < (1UL << 2)) ? 2 : -1) + (((unsigned long) 42 >= (1UL << (3 - 1)) && (unsigned long) 42 < (1UL << 3)) ? 3 : -1) + (((unsigned long) 42 >= (1UL << (4 - 1)) && (unsigned long) 42 < (1UL << 4)) ? 4 : -1) + (((unsigned long) 42 >= (1UL << (5 - 1)) && (unsigned long) 42 < (1UL << 5)) ? 5 : -1) + (((unsigned long) 42 >= (1UL << (6 - 1)) && (unsigned long) 42 < (1UL << 6)) ? 6 : -1) + (((unsigned long) 42 >= (1UL << (7 - 1)) && (unsigned long) 42 < (1UL << 7)) ? 7 : -1) + (((unsigned long) 42 >= (1UL << (8 - 1)) && (unsigned long) 42 < (1UL << 8)) ? 8 : -1) + (((unsigned long) 42 >= (1UL << (9 - 1)) && (unsigned long) 42 < (1UL << 9)) ? 9 : -1) + (((unsigned long) 42 >= (1UL << (10 - 1)) && (unsigned long) 42 < (1UL << 10)) ? 10 : -1) + (((unsigned long) 42 >= (1UL << (11 - 1)) && (unsigned long) 42 < (1UL << 11)) ? 11 : -1) + (((unsigned long) 42 >= (1UL << (12 - 1)) && (unsigned long) 42 < (1UL << 12)) ? 12 : -1) + (((unsigned long) 42 >= (1UL << (13 - 1)) && (unsigned long) 42 < (1UL << 13)) ? 13 : -1) + (((unsigned long) 42 >= (1UL << (14 - 1)) && (unsigned long) 42 < (1UL << 14)) ? 14 : -1) + (((unsigned long) 42 >= (1UL << (15 - 1)) && (unsigned long) 42 < (1UL << 15)) ? 15 : -1) + (((unsigned long) 42 >= (1UL << (16 - 1)) && (unsigned long) 42 < (1UL << 16)) ? 16 : -1) + (((unsigned long) 42 >= (1UL << (17 - 1)) && (unsigned long) 42 < (1UL << 17)) ? 17 : -1) + (((unsigned long) 42 >= (1UL << (18 - 1)) && (unsigned long) 42 < (1UL << 18)) ? 18 : -1) + (((unsigned long) 42 >= (1UL << (19 - 1)) && (unsigned long) 42 < (1UL << 19)) ? 19 : -1) + (((unsigned long) 42 >= (1UL << (20 - 1)) && (unsigned long) 42 < (1UL << 20)) ? 20 : -1) + (((unsigned long) 42 >= (1UL << (21 - 1)) && (unsigned long) 42 < (1UL << 21)) ? 21 : -1) + (((unsigned long) 42 >= (1UL << (22 - 1)) && (unsigned long) 42 < (1UL << 22)) ? 22 : -1) + (((unsigned long) 42 >= (1UL << (23 - 1)) && (unsigned long) 42 < (1UL << 23)) ? 23 : -1) + (((unsigned long) 42 >= (1UL << (24 - 1)) && (unsigned long) 42 < (1UL << 24)) ? 24 : -1) + (((unsigned long) 42 >= (1UL << (25 - 1)) && (unsigned long) 42 < (1UL << 25)) ? 25 : -1) + (((unsigned long) 42 >= (1UL << (26 - 1)) && (unsigned long) 42 < (1UL << 26)) ? 26 : -1) + (((unsigned long) 42 >= (1UL << (27 - 1)) && (unsigned long) 42 < (1UL << 27)) ? 27 : -1) + (((unsigned long) 42 >= (1UL << (28 - 1)) && (unsigned long) 42 < (1UL << 28)) ? 28 : -1) + (((unsigned long) 42 >= (1UL << (29 - 1)) && (unsigned long) 42 < (1UL << 29)) ? 29 : -1) + (((unsigned long) 42 >= (1UL << (30 - 1)) && (unsigned long) 42 < (1UL << 30)) ? 30 : -1) + (((unsigned long) 42 >= (1UL << (31 - 1)) && (unsigned long) 42 < (1UL << 31)) ? 31 : -1) + (((unsigned long) 42 >= (1UL << (32 - 1)) && (unsigned long) 42 < (1UL << 32)) ? 32 : -1) ) )];
return 0;
}
It looks terrible and, if it were evaluated at run time, it would be quite a lot of instructions. However, since all operands are constants (or, more precisely, literals), the compiler is able to evaluate this at compile time. It must do so here because an array length declaration must be a constant in C89.
If you use the macro in other places where a compile-time constant is not required, it is up to the compiler whether or not it evaluates the expression at compile time. However, you should expect any reasonable compiler to perform this rather elementary optimization, known as constant folding, if optimization is enabled. When in doubt, as always, look at the generated assembly code.
For example, consider this program.
int
main()
{
return BITS_TO_REPRESENT(42);
}
The expression in a return statement obviously does not need to be a compile-time constant, so let's see what code GCC will generate. (I use the -S switch to stop after the compilation stage.)
Even without any optimizations enabled, I end up with the following assembly code, which shows that the macro expansion was folded into the constant 6.
main:
.LFB0:
.cfi_startproc
pushq %rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq %rsp, %rbp
.cfi_def_cfa_register 6
movl $6, %eax # See the constant 6?
popq %rbp
.cfi_def_cfa 7, 8
ret
.cfi_endproc
A slightly shorter definition of a LOG macro that works with integers of up to 32 bits could be:
#define LOG_1(n) (((n) >= 2) ? 1 : 0)
#define LOG_2(n) (((n) >= 1<<2) ? (2 + LOG_1((n)>>2)) : LOG_1(n))
#define LOG_4(n) (((n) >= 1<<4) ? (4 + LOG_2((n)>>4)) : LOG_2(n))
#define LOG_8(n) (((n) >= 1<<8) ? (8 + LOG_4((n)>>8)) : LOG_4(n))
#define LOG(n) (((n) >= 1<<16) ? (16 + LOG_8((n)>>16)) : LOG_8(n))
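As a quick sanity check, here is a sketch that verifies a few values at compile time (the typedef names are illustrative; the negative-array-size idiom again stands in for a static assertion):
typedef char log_1_is_0    [(LOG(1) == 0) ? 1 : -1];
typedef char log_42_is_5   [(LOG(42) == 5) ? 1 : -1];     /* floor(log2(42)) = 5 */
typedef char log_65536_is_16[(LOG(65536) == 16) ? 1 : -1];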
However, before using it, check whether you really need it. People often need the logarithm of values that are a power of 2, for example when implementing bit arrays and the like. While it is difficult to compute LOG as a constant expression, it is very easy to define a power of 2. So you can define your constants as:
#define logA 4
#define A (1<<logA)
instead of:
#define A 16
#define logA LOG(A)
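To see why the first style is convenient, here is a hypothetical bit-array sketch; WORD_INDEX and BIT_OFFSET are illustrative names, not part of the original answer:
/* With A bits per word, bit i lives in word i >> logA at offset
   i & (A - 1); having logA as the primary constant makes both cheap. */
#define WORD_INDEX(i) ((i) >> logA)
#define BIT_OFFSET(i) ((i) & (A - 1))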
The C preprocessor's #define is purely a text substitution mechanism. You will not be able to compute logarithm values at compile time with it alone.
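A tiny sketch of what "text substitution" means in practice (the macro names are made up for illustration):
/* After "gcc -E", SIZE below has been replaced by the text (2 * 8),
   not by 16; any folding to 16 happens later, in the compiler. */
#define WIDTH 2
#define HEIGHT 8
#define SIZE (WIDTH * HEIGHT)
int cells[SIZE];   /* the preprocessor emits: int cells[(2 * 8)]; */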
You might be able to do it with C++ templates, but that is black magic I don't understand, and it is not really relevant here.
Or, as I mentioned in the comment below, you could build your own pre-processor that evaluates the array-size equations before handing the updated code to the standard C compiler.
Edit: In the meantime I have seen this question: Do any C or C++ compilers optimize within define macros?
That question is about evaluating this chain of macros:
#include <math.h>
#define ROWS 15
#define COLS 16
#define COEFF 0.15
#define NODES (ROWS*COLS)
#define A_CONSTANT (COEFF*(sqrt(NODES)))
The consensus was that A_CONSTANT could be a compile-time constant, depending on how smart the compiler is and which math functions are defined as intrinsics. It also hinted that GCC is smart enough to figure it out in this case.
So the answer to your question can be found by trying it and looking at what code the compiler actually generates.
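For instance, a quick way to check (a sketch, assuming GCC) is to wrap the constant in a function, compile with -O2 -S, and look for a precomputed literal instead of a call to sqrt:
#include <math.h>

#define ROWS 15
#define COLS 16
#define COEFF 0.15
#define NODES (ROWS*COLS)
#define A_CONSTANT (COEFF*(sqrt(NODES)))

/* If the compiler treats sqrt as an intrinsic, this compiles to a
   load of the constant 0.15 * sqrt(240), i.e. about 2.3238. */
double a_constant(void)
{
    return A_CONSTANT;
}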
This answer is inspired by 5gon12eder's, but with a simpler first macro. Unlike 5gon12eder's solution, this implementation gives 0 for BITS_TO_REPRESENT(0), which is arguably correct. The BITS_TO_REPRESENT(N) macro returns the number of bits needed to represent an unsigned integer less than or equal to the non-negative integer N; storing a signed number of the same magnitude would take one more bit.
#define NEEDS_BIT(N, B) (((unsigned long) (N) >> (B)) > 0)
#define BITS_TO_REPRESENT(N) \
(NEEDS_BIT(N, 0) + NEEDS_BIT(N, 1) + \
NEEDS_BIT(N, 2) + NEEDS_BIT(N, 3) + \
NEEDS_BIT(N, 4) + NEEDS_BIT(N, 5) + \
NEEDS_BIT(N, 6) + NEEDS_BIT(N, 7) + \
NEEDS_BIT(N, 8) + NEEDS_BIT(N, 9) + \
NEEDS_BIT(N, 10) + NEEDS_BIT(N, 11) + \
NEEDS_BIT(N, 12) + NEEDS_BIT(N, 13) + \
NEEDS_BIT(N, 14) + NEEDS_BIT(N, 15) + \
NEEDS_BIT(N, 16) + NEEDS_BIT(N, 17) + \
NEEDS_BIT(N, 18) + NEEDS_BIT(N, 19) + \
NEEDS_BIT(N, 20) + NEEDS_BIT(N, 21) + \
NEEDS_BIT(N, 22) + NEEDS_BIT(N, 23) + \
NEEDS_BIT(N, 24) + NEEDS_BIT(N, 25) + \
NEEDS_BIT(N, 26) + NEEDS_BIT(N, 27) + \
NEEDS_BIT(N, 28) + NEEDS_BIT(N, 29) + \
NEEDS_BIT(N, 30) + NEEDS_BIT(N, 31) \
)
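A few compile-time spot checks of this macro (a sketch; the typedef names are illustrative and the negative-array-size idiom stands in for a static assertion in older C):
typedef char check_0  [(BITS_TO_REPRESENT(0)   == 0) ? 1 : -1];
typedef char check_255[(BITS_TO_REPRESENT(255) == 8) ? 1 : -1];
typedef char check_256[(BITS_TO_REPRESENT(256) == 9) ? 1 : -1];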
BITS_TO_REPRESENT is almost a base-2 logarithm. Since the default floating-point-to-integer conversion is truncation, the integer version of a base-2 logarithm corresponds to the floating-point expression floor(log(N)/log(2)). BITS_TO_REPRESENT(N) returns one more than floor(log(N)/log(2)).
For example:
- BITS_TO_REPRESENT(7) is 3, whereas floor(log(7)/log(2)) is 2.
- BITS_TO_REPRESENT(8) is 4, whereas floor(log(8)/log(2)) is 3.
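To see the off-by-one relationship directly, here is a small runnable cross-check against math.h (a sketch; it assumes the BITS_TO_REPRESENT macro above is in scope):
#include <math.h>
#include <stdio.h>

int main(void)
{
    unsigned long n;
    for (n = 1; n <= 100000UL; ++n) {
        /* the tiny epsilon guards against log() landing one ulp low
           at exact powers of two */
        int flr = (int) floor(log((double) n) / log(2.0) + 1e-9);
        if (BITS_TO_REPRESENT(n) != flr + 1)
            printf("mismatch at %lu\n", n);
    }
    return 0;
}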