Analysis of the coverage of numba-wrapped functions

I wrote a Python module, most of which is wrapped in @numba.jit decorators for speed. I also wrote a lot of tests for this module, which I run (on Travis-CI) with py.test. Now I'm trying to look at the coverage of these tests using pytest-cov, which is a plugin that relies on coverage (with the hope of eventually integrating it all with coveralls).

Unfortunately, it seems that using numba.jit on all these functions makes coverage think they are never run, which is kind of the case. So I get essentially no coverage reported for my module. This is not a big surprise, since numba takes that code and compiles it, so the Python code itself never really executes. But I was hoping there would be some kind of magic in place that makes this work anyway...

Is there any useful way to combine these two excellent tools? Failing that, is there another tool I could use to measure coverage with numba?

[I've made a minimal working example showing the difference here.]
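For concreteness, here is a minimal sketch of the kind of setup involved (the module, file, and function names here are hypothetical, not the actual linked example):

    # mymodule.py -- hypothetical module wrapped in numba.jit
    import numba

    @numba.jit(nopython=True)
    def total(values):
        # numba compiles this body, so coverage never sees these lines run
        result = 0.0
        for v in values:
            result += v
        return result

    # test_mymodule.py -- run with: py.test --cov=mymodule
    import numpy as np
    from mymodule import total

    def test_total():
        assert total(np.array([1.0, 2.0, 3.0])) == 6.0

With the JIT active, coverage reports the body of total as never executed, even though the test passes.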



2 answers


Your best bet would be to disable the numba JIT while measuring coverage. That relies on you trusting the correspondence between the Python code and the JIT-compiled code, but you need to trust that to some extent anyway.
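A minimal sketch of one way to do that, assuming a numba version recent enough to honour the NUMBA_DISABLE_JIT environment variable: set the variable before numba is imported, for example in a pytest conftest.py, so the @numba.jit-decorated functions run as plain Python and coverage can trace them.

    # conftest.py -- picked up by py.test before the test modules
    import os

    # Must be set before numba is imported anywhere, so that the
    # decorated functions fall back to the pure-Python implementation.
    os.environ["NUMBA_DISABLE_JIT"] = "1"

After that, py.test --cov=mymodule reports ordinary line coverage for the wrapped functions. Note that the numbers then describe the Python fallback rather than the compiled code, which is exactly the trust trade-off mentioned above.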





Not that this answers the question, but I thought I should advertise another approach that someone might find worth pursuing. There is probably something really nice that could be done with llvm-cov. Presumably this would need to be implemented within numba, and the LLVM code would have to be instrumented, which would require some kind of flag. But since numba knows the correspondence between lines of Python code and the generated LLVM code, there must be something that could be implemented by someone smarter than me.









