What tests can I run to make sure I have overridden __setattr__ correctly?

I've been learning Python for a while, and I have realized that overriding __setattr__ correctly can be frustrating (to say the least!).

What are some efficient ways to ensure/prove that the override is done correctly? I am especially concerned that the override remain compatible with the descriptor protocol and the MRO.

(I've tagged this Python 3.x, but the question certainly applies to other versions as well.)

Sample code where the "override" merely reproduces the default behavior (but how can I prove that it does?):

class MyClass():
    def __setattr__(self, att, val):
        print("I am exhibiting default behavior!")
        super().__setattr__(att, val)
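
For example, a rough spot check might look like the following (WithProperty is just an ad-hoc helper class made up for the check): plain attributes should still land in the instance __dict__, and a property (a data descriptor) on the class should still intercept assignment despite the override.

obj = MyClass()
obj.x = 1
assert obj.__dict__['x'] == 1      # stored exactly where object.__setattr__ puts it

class WithProperty(MyClass):
    @property
    def prop(self):
        return self._prop
    @prop.setter
    def prop(self, value):
        self._prop = value * 2

p = WithProperty()
p.prop = 21                        # goes through MyClass.__setattr__ ...
assert p.prop == 42                # ... but the property setter is still honored
assert 'prop' not in p.__dict__    # the data descriptor was not bypassed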

      

A contrived example in which the override violates the descriptor protocol (the dict stored on the instance is consulted before any descriptor is found, but how can I check that?):

class MyClass():
    def __init__(self, mydict):
        self.__dict__['mydict'] = mydict

    @property
    def mydict(self):
        # expose the dict stored directly in the instance __dict__
        return self.__dict__['mydict']

    def __setattr__(self, att, val):
        if att in self.mydict:
            self.mydict[att] = val
        else:
            super().__setattr__(att, val)
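
As an illustration of what goes wrong (just a rough sketch): an assignment to a name that happens to be a key of mydict is swallowed by the dict and never reaches object.__setattr__, so a class-level descriptor for that name would be bypassed.

obj = MyClass({'x': 1})
obj.x = 2
print(obj.__dict__['mydict'])   # {'x': 2}  -- the assignment went into the dict
print('x' in vars(obj))         # False     -- no real instance attribute was created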

      

EDIT: The ideal answer would be a generic test that succeeds if __setattr__ is overridden correctly and fails otherwise.


2 answers


In this case there is a simple solution: add a binding descriptor under a name that is also in mydict, and check that assigning to that name goes through the descriptor (NB: Python 2.x code, I don't have a Python 3 install here):



class MyBindingDescriptor(object):
    """Data descriptor that stores its value in the instance __dict__."""

    def __init__(self, key):
        self.key = key

    def __get__(self, obj, cls=None):
        if obj is None:
            # accessed on the class itself
            return self
        return obj.__dict__[self.key]

    def __set__(self, obj, value):
        obj.__dict__[self.key] = value


sentinel = object()

class MyClass(object):
    test = MyBindingDescriptor("test")

    def __init__(self, mydict):
        self.__dict__['mydict'] = mydict
        self.__dict__["test"] = sentinel

    def __setattr__(self, att, val):
        if att in self.mydict:
            self.mydict[att] = val
        else:
            super(MyClass, self).__setattr__(att, val)


# first test our binding descriptor
instance1 = MyClass({})
# sanity check 
assert instance1.test is sentinel, "instance1.test should be sentinel, got '%s' instead" % instance1.test

# this one should pass ok
instance1.test = NotImplemented
assert instance1.test is NotImplemented, "instance1.test should be NotImplemented, got '%s' instead" % instance1.test

# now demonstrate that the current implementation is broken: the next assert
# fails because the assignment goes into mydict instead of through the
# descriptor's __set__
instance2 = MyClass({"test":42})
instance2.test = NotImplemented
assert instance2.test is NotImplemented, "instance2.test should be NotImplemented, got '%s' instead" % instance2.test
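
The same check can be packaged for a test runner; here is a minimal unittest sketch (it reuses the MyClass and MyBindingDescriptor defined above, and the test class name is arbitrary):

import unittest

class SetattrDescriptorTest(unittest.TestCase):
    def test_descriptor_receives_assignment(self):
        # fails with the __setattr__ above, because the assignment is
        # swallowed by mydict instead of reaching MyBindingDescriptor.__set__
        instance = MyClass({"test": 42})
        instance.test = NotImplemented
        self.assertIs(instance.test, NotImplemented)

if __name__ == "__main__":
    unittest.main()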

      



If you define overriding __setattr__ "correctly" as calling the parent class's __setattr__, then you can slot your class into a class hierarchy that defines its own __setattr__:

def inject_tester_class(cls):
    def __setattr__(self, name, value):
        self._TesterClass__setattr_args.append((name, value))
        super(intermediate, self).__setattr__(name, value)
    def assertSetAttrDelegatedFor(self, name, value):
        assert \
            [args for args in self._TesterClass__setattr_args if args == (name, value)], \
            '__setattr__(name, value) was never delegated'
    body = {
        '__setattr__': __setattr__,
        'assertSetAttrDelegatedFor': assertSetAttrDelegatedFor,
        '_TesterClass__setattr_args': []
    }

    # build an intermediate class from the original bases, then re-create the
    # original class on top of it so the intermediate sits in the MRO
    intermediate = type('TesterClass', cls.__bases__, body)
    testclass = type(cls.__name__, (intermediate,), vars(cls).copy())

    # rebind the __class__ cell that zero-argument super() uses in the copied
    # __setattr__ so that it points at the re-created class
    def closure():
        testclass
    osa = testclass.__setattr__
    new_closure = tuple(closure.__closure__[0] if n == '__class__' else c
                        for n, c in zip(osa.__code__.co_freevars, osa.__closure__))
    testclass.__setattr__ = type(osa)(
        osa.__code__, osa.__globals__, osa.__name__,
        osa.__defaults__, new_closure)

    return testclass

      

This function jumps through several hoops to insert an intermediate class that intercepts any properly delegated __setattr__ call. It works even if your class has no base class other than the standard object (whose __setattr__ we could not otherwise replace to give us an easier way of testing this).

This makes the assumption that you are using super().__setattr__() for delegation, calling super() with no arguments. It also assumes that no metaclass is involved.
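
As a side note, the zero-argument super() assumption is visible on the function object itself: a method that calls super() with no arguments carries a __class__ closure cell, which is exactly the cell the function above rebinds (Demo below is just a throwaway example class):

class Demo(object):
    def __setattr__(self, name, value):
        super().__setattr__(name, value)

print(Demo.__setattr__.__code__.co_freevars)   # ('__class__',)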



The extra __setattr__ is introduced in a way that is consistent with the existing MRO: an additional intermediate class is injected between the original class and the rest of the MRO, and it is that class that records and dispatches the delegated __setattr__ call.
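
A quick way to see the injected class is to print the resulting MRO (an illustrative check, assuming MyClass inherits only from object):

MyTestClass = inject_tester_class(MyClass)
print([c.__name__ for c in MyTestClass.__mro__])
# ['MyClass', 'TesterClass', 'object']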

To use this in your test, create a new class with the above function, instantiate it, and set attributes on that instance:

MyTestClass = inject_tester_class(MyClass)
my_test_instance = MyTestClass()
my_test_instance.foo = 'bar'
my_test_instance.assertSetAttrDelegatedFor('foo', 'bar')

      

If setting foo is not delegated, an AssertionError is raised, which a unittest test runner records as a test failure.
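
Wrapped in a unittest test case, that might look like this (just a sketch; the test class name is arbitrary, and MyTestClass is instantiated without arguments exactly as in the example above):

import unittest

class SetattrDelegationTest(unittest.TestCase):
    def test_setattr_is_delegated(self):
        MyTestClass = inject_tester_class(MyClass)
        instance = MyTestClass()
        instance.foo = 'bar'
        # raises AssertionError -- reported as a test failure -- if the
        # ('foo', 'bar') assignment was never delegated
        instance.assertSetAttrDelegatedFor('foo', 'bar')

if __name__ == "__main__":
    unittest.main()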
