What is the time complexity of a function lookup operation in Python?

I am wondering: since one common optimization strategy is to "cache" the lookup in a local variable and then call the method / function through that variable, how expensive is the lookup itself?

This is what I mean by "caching" the lookup, in case that is not the correct term:

class TestClass:

    def myMethod(self):
        printMethod = self.printMethod
        for i in range(1000):
            printMethod(i)

    def printMethod(self, i):
        print(i)


3 answers


The savings are not in asymptotic complexity; they are in constant factors, i.e. real wall-clock time. Finding a function name in a namespace is just looking up a key in a dictionary, which is O(1). Finding an attribute on an object is also a dict lookup, which again is O(1). There is an optimized opcode for looking up local variables by name, but even that cannot be faster than O(1).

In your example, the lookup self.printMethod performs a local-variable lookup (self) and then an attribute lookup (printMethod). That is two lookups. If you store the result in a local variable, each subsequent access to printMethod is just one lookup instead of two. It is still O(1), but it is faster because the constant is smaller.
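To make the "attribute lookup is a dict lookup" point concrete, here is a small sketch (the class and names are invented for the example) showing that a method is found in the class's __dict__, not on the instance:

```python
class C:
    def f(self):
        return 42

obj = C()

# The method is not stored on the instance; it lives in the class dict.
assert 'f' not in obj.__dict__
assert 'f' in type(obj).__dict__

# obj.f builds a bound method from the plain function found in the
# class dict -- calling either form gives the same result.
assert obj.f() == type(obj).__dict__['f'](obj) == 42
```

So both steps of self.printMethod are, at bottom, hash-table lookups.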



This question discusses how name lookup works in Python.


Here's some code you can use to measure the difference:

http://pastebin.com/svBN5NZ9

And some timing results:



In [2]: %timeit Class1().runCached(10000)
1000 loops, best of 3: 1.74 ms per loop

In [3]: %timeit Class1().runNormal(10000)
100 loops, best of 3: 2.92 ms per loop

In [4]: %timeit Class10().runCached(10000)
1000 loops, best of 3: 1.7 ms per loop

In [5]: %timeit Class10().runNormal(10000)
100 loops, best of 3: 6.01 ms per loop

In [6]: %timeit Class100().runCached(10000)
1000 loops, best of 3: 1.7 ms per loop

In [7]: %timeit Class100().runNormal(10000)
10 loops, best of 3: 42.9 ms per loop


Thus, in general, method caching is faster, and the method lookup time grows with the depth of the class's inheritance hierarchy.
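The pastebin code is not shown inline; here is a minimal sketch of what such a benchmark could look like (make_deep_class, run_normal, and run_cached are invented names, and the class shapes are an assumption about the original code):

```python
import timeit

def make_deep_class(depth):
    """Build a class whose only method sits 'depth' levels up the MRO."""
    class Base:
        def method(self, i):
            pass
    cls = Base
    for _ in range(depth - 1):
        cls = type('Sub', (cls,), {})
    return cls

def run_normal(obj, n):
    # attribute lookup on every iteration
    for i in range(n):
        obj.method(i)

def run_cached(obj, n):
    # look the bound method up once, then reuse it
    method = obj.method
    for i in range(n):
        method(i)

for depth in (1, 10, 100):
    obj = make_deep_class(depth)()
    t_n = timeit.timeit(lambda: run_normal(obj, 10000), number=10)
    t_c = timeit.timeit(lambda: run_cached(obj, 10000), number=10)
    # Note: recent CPython versions cache method resolution per type,
    # so the depth effect may be much smaller than in the results above.
    print('depth %3d: normal %.4f s, cached %.4f s' % (depth, t_n, t_c))
```

Absolute numbers will differ by machine and interpreter version; the interesting part is the ratio between the normal and cached runs.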

But note that you may get different results under a tracing JIT such as PyPy, since the trace can effectively cache the method pointer for you.



Two O(1) operations can take different amounts of time. Accessing an instance attribute ( self.printMethod ) and accessing a local variable are both O(1), but local-variable access is optimized so that no dictionary lookup is required, which makes it faster. Look at the bytecode for accessing a local variable versus an instance attribute in CPython:

>>> import dis
>>> class MyClass:
...   def printMethod(self):
...     pass
...   def code(self):
...     pm = self.printMethod
...     self.printMethod()
...     pm()
...
>>> dis.dis(MyClass.code)
  5           0 LOAD_FAST                0 (self)
              3 LOAD_ATTR                0 (printMethod)
              6 STORE_FAST               1 (pm)

  6           9 LOAD_FAST                0 (self)
             12 LOAD_ATTR                0 (printMethod)
             15 CALL_FUNCTION            0
             18 POP_TOP

  7          19 LOAD_FAST                1 (pm)
             22 CALL_FUNCTION            0
             25 POP_TOP
             26 LOAD_CONST               0 (None)
             29 RETURN_VALUE
>>>


You can see that accessing pm performs a single LOAD_FAST operation, which loads the value from a fixed numeric slot in the local variables, whereas accessing self.printMethod requires an additional LOAD_ATTR operation.

Of course, it takes time to store the value into the local variable in the first place, so the variable needs to be used multiple times (as in the example code) before you see any performance benefit.
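As a rough illustration of that trade-off (names invented, timings machine-dependent), you can compare a single cached call against a single direct call, and then the same comparison inside a loop:

```python
import timeit

class C:
    def method(self):
        pass

obj = C()

def once_direct():
    obj.method()            # one lookup + one call

def once_cached():
    m = obj.method          # the extra store costs time too
    m()

def many_direct():
    for _ in range(1000):
        obj.method()        # LOAD_FAST + LOAD_ATTR every time

def many_cached():
    m = obj.method          # pay for the store once...
    for _ in range(1000):
        m()                 # ...then each call is a cheap LOAD_FAST

t_once_direct = timeit.timeit(once_direct, number=100000)
t_once_cached = timeit.timeit(once_cached, number=100000)
t_many_direct = timeit.timeit(many_direct, number=1000)
t_many_cached = timeit.timeit(many_cached, number=1000)
print(t_once_direct, t_once_cached, t_many_direct, t_many_cached)
```

For a single call the caching version does strictly more work; only the looped version can come out ahead.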

As @user5402 points out, your mileage may vary depending on the implementation you are using, due to compiler optimizations.







