Sure it's surprising, but what's the alternative?

Armin Ronacher (aka @mitsuhiko) did a really nice job of explaining some of the behaviours of Python that are often confusing to developers coming from other languages.

However, in that article, he also commented on some of the behaviours of Python that he still considers surprising and questions whether or not he would retain them in the hypothetical event of designing a new Python-inspired language that had the chance to do something different.

My perspective on two of the behaviours he listed is that they stem from fundamental underlying concepts that we really don't explain well, even to current Python users. This reply is intended to be a small step towards correcting that. I mostly agree with him on the third, but I really don't know what could be done as an alternative.

The dual life of "."


Addressing the case where I mostly agree with Armin first, the period has two main uses in Python: as part of floating point and complex number literals and as the identifier separator for attribute access. These two uses collide when it comes to accessing attributes directly on integer literals. Historically that wasn't an issue, since integers didn't really have any interesting attributes anyway.

However, with the addition of the "bit_length" method and the introduction of the standardised numeric tower (and the associated methods and attributes inherited from the Complex and Rational ABCs), integers now have some public attributes in addition to the special method implementations they have always provided. That means we'll sometimes see things like:
1000000 .bit_length()
to avoid this confusing error:
>>> 1000000.bit_length()
  File "<stdin>", line 1
    1000000.bit_length()
                      ^
SyntaxError: invalid syntax
This could definitely be avoided by abandoning Guido's rule that parsing Python shall not require anything more sophisticated than an LL(1) parser, and instead requiring the parser to backtrack when float parsing fails and reinterpret the operation as attribute access. (That said, looking at the token stream, I'm now wondering if it may even be possible to fix this within the constraints of LL(1): the tokenizer emits two tokens for "1.bit_length", but only one for something like "1.e16". I'm not sure the concept can be expressed in the Grammar in a way that the CPython parser generator would understand, though.)
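For anyone who wants to see that difference for themselves, the standard library's tokenize module will happily show you the token stream. The snippet below is purely illustrative (the helper name is made up), not part of any proposed fix:

import io
import tokenize

def show_tokens(source):
    # Print the name and text of each token the tokenizer emits
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        print(tokenize.tok_name[tok.type], repr(tok.string))

show_tokens("1.bit_length\n")   # NUMBER '1.' then NAME 'bit_length' (plus NEWLINE/ENDMARKER)
show_tokens("1.e16\n")          # a single NUMBER token '1.e16' (plus NEWLINE/ENDMARKER)

Note that tokenising "1.bit_length" succeeds without complaint; it's the parser that then rejects the NUMBER NAME sequence.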

Decorators vs decorator factories


This is the simpler case of the two where I think we have a documentation and education problem rather than a fundamental design flaw, and it stems largely from a bit of careless terminology: the word "decorator" is used widely to refer not only to actual decorators but also to decorator factories. Decorator expressions (the bit after a "@" on a line preceding a function header) are required to produce callables that accept a single argument (typically the function being defined) and return a result that will be bound to the name used in the function header line (usually either the original function or else some kind of wrapper around it). These expressions typically take one of two forms: they either reference an actual decorator by name, or else they will call a decorator factory to create an appropriate decorator at function definition time.

And that's where the sloppy terminology catches up with us: because we've loosely used the term "decorator" for both actual decorators and decorator factories since the early days of PEP 318, decorator implementers get surprised at how difficult the transition can be from "simple decorator" to "decorator with arguments". In reality, it is just as hard as any other transition from providing a single instance of an object to instead providing a factory function that creates such instances, but the loose terminology obscures that.
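To make that transition concrete, here's a rough sketch (the names are invented for the example) of a plain decorator alongside the factory you need to write as soon as you want the decorator to accept arguments of its own:

import functools

def traced(func):
    # A plain decorator: receives the function, returns the wrapper
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("calling", func.__name__)
        return func(*args, **kwargs)
    return wrapper

def traced_with_label(label):
    # A decorator factory: receives the *arguments*, then builds and returns
    # the actual decorator that will receive the function
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            print(label, "calling", func.__name__)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@traced                        # references an actual decorator by name
def greet():
    print("Hello")

@traced_with_label("DEBUG")    # calls a factory to create the decorator
def farewell():
    print("Goodbye")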

I actually find this case to be somewhat analogous to the case of first class functions. Many developers coming to Python from languages with implicit call semantics (i.e. parentheses optional) get frustrated by the fact that Python demands they always supply the (to them) redundant parentheses. Of course, experienced Python programmers know that, due to the first class nature of functions in Python, "f" just refers to the function itself and "f()" is needed to actually call it.
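The interactive prompt makes that distinction easy to see:

>>> def f():
...     return 42
...
>>> f      # a reference to the function object itself
<function f at 0x...>
>>> f()    # the parentheses are what actually call it
42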

The situation with decorator factories is similar. @classmethod is an actual decorator, so no parentheses are needed and we can just refer to it directly. Something like @functools.wraps, on the other hand, is a decorator factory, so we need to call it if we want it to create a real decorator for us to use.

Evaluating default parameters at function definition time


This is another case where I think we have an underlying education and documentation problem and the confusion over mutable default arguments is just a symptom of that. To make this one extra special, it lies at the intersection of two basic points of confusion, only one of which is well publicised.

The immutable vs mutable confusion is well documented (and, indeed, Armin pointed it out in his article in the context of the handling of ordinary function arguments), so I'm not going to repeat it here. The second, less well documented point of confusion is the lack of a clear explanation in the official documentation of the differences between compilation time (syntax checks), function definition time (decorators, default argument evaluation) and function execution time (argument binding, execution of the function body). (Generators actually split that last part up even further.)
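A quick interactive session makes the definition time vs execution time split visible (this is just an illustration, with a made-up helper):

>>> def make_default():
...     print("evaluating the default")
...     return []
...
>>> def f(x=make_default()):    # runs now, as part of executing the "def"
...     return x
...
evaluating the default
>>> f()                         # nothing printed: the default isn't re-evaluated
[]
>>> f() is f()                  # every call shares that single list object
True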

However, Armin clearly understands both of those distinctions, so I can't chalk his particular objection up to that lack of explanation. Instead, I'm going to consider the question of "Well, what if Python didn't work that way?".

If default arguments aren't evaluated at definition time in the scope defining the function, what are the alternatives? The only alternative that readily presents itself is to keep the code objects around as distinct closures. As a point of history, Python had default arguments long before it had closures, so that provides a very practical reason why deferred evaluation of default argument expressions really wasn't an option. However, this is a hypothetical discussion, so we'll continue.
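You can get a feel for what that alternative would look like by emulating it by hand today: store a small callable rather than a value, and call it on every invocation that doesn't supply the argument. This is only a sketch with made-up names, not a proposal:

_MISSING = object()    # unique sentinel, so None remains usable as a real value

def f(x=_MISSING, _make_x=lambda: []):
    # Deferred evaluation by hand: the "default expression" lives in its own
    # little function and is re-run on every call that omits x
    if x is _MISSING:
        x = _make_x()
    x.append(1)
    return x

print(f())    # [1]
print(f())    # [1] again: a fresh list each call, unlike a mutable default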

Now we get to the first serious objection: the performance hit. Instead of just moving a few object references around, in the absence of some fancy optimisation in the compiler, deferred evaluation of even basic default arguments like "x=1, y=2" is going to require multiple function calls to actually run the code in the closures. That may be feasible if you've got a sophisticated toolchain like PyPy backing you up but is a serious concern for simpler toolchains. Evaluating some expressions and stashing the results on the function object? That's easy. Delaying the evaluation and redoing it every time it's needed? Probably not too hard (as long as closures are already available). Making the latter as fast as the former for the simple, common cases (like immutable constants)? Damn hard.
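If you're curious about the size of that gap, timeit makes a rough comparison easy. No numbers are quoted here, since they'll vary between interpreters and versions, and this hand-rolled emulation is only an approximation of what a compiler might actually generate:

import timeit

setup = """
def with_constant_defaults(x=1, y=2):
    return x + y

_MISSING = object()
_default_x = lambda: 1
_default_y = lambda: 2

def with_deferred_defaults(x=_MISSING, y=_MISSING):
    # Simulating per-call evaluation: two extra function calls per invocation
    if x is _MISSING:
        x = _default_x()
    if y is _MISSING:
        y = _default_y()
    return x + y
"""

print(timeit.timeit("with_constant_defaults()", setup=setup))
print(timeit.timeit("with_deferred_defaults()", setup=setup))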

But, let's further suppose we've done the work to handle the cases that allow constant folding nicely and we still cache those on the function object so we're not getting a big speed hit. What happens to our name lookup semantics from default argument expressions when we have deferred evaluation? Why, we get closure semantics of course, and those are simple and natural and never confused anybody, right? (if you believe I actually mean that, I have an Opera House to sell you...)
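For anyone who hasn't been bitten by them yet, here's the classic example of closure semantics catching people out (note the irony that the usual fix relies on definition time evaluation of a default argument):

>>> callbacks = [lambda: i for i in range(3)]
>>> [f() for f in callbacks]    # every closure sees the *final* value of i
[2, 2, 2]
>>> callbacks = [lambda i=i: i for i in range(3)]
>>> [f() for f in callbacks]    # fixed by capturing i as a default argument
[0, 1, 2]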

While Python's default argument handling is the way it is at least partially due to history (i.e. the lack of closures when it was implemented meant that storing the expression results on the function object was the only viable way to do it), the additional runtime overhead and the extra complexity in the implementation and semantics involved in delaying the evaluation make me suspect that going down the runtime evaluation path wouldn't necessarily be the clear win that Armin suggests it would be.
