Seven billion seconds per second

A couple of years ago, YouTube put together their "One hour per second" site, visualising the fact that for every second of time that elapses, an hour of video is uploaded to YouTube. Their current statistics page indicates that figure is now up to 100 hours per minute (about 1.7 hours per second).

Impressive numbers to be sure. However, there's another set of numbers I personally consider significantly more impressive: every second, more than seven billion seconds are added to the tally of collective human existence on Earth.

Think about that for a moment.

Tick. Another 7 billion seconds of collective human existence.

Tick. Another 117 million minutes of collective human existence.

Tick. Another 2 million hours of collective human existence.

Tick. Another 81 thousand days of collective human existence.

Tick. Another 11 thousand weeks of collective human existence.

Tick. Another 222 years of collective human existence.

222 years of collective human experience, every single second, of every single day. And as the world population grows, it's only going to get faster.

222 years of collective human experience per second.

13 millennia per minute.

801 millennia per hour.

19 million years per day.

135 million years per week.

7 billion years per year.

The growth in our collective human experience over the course of a single year would stretch halfway back to the dawn of time if it was experienced by an individual.

We currently squander most of that potential. We allow a lot of it to be wasted scrabbling for the basic means of survival like food, clean water and shelter. We lock knowledge up behind closed doors, forcing people to reinvent solutions to already solved problems because they can't afford the entry fee.

We ascribe value to people based solely on their success in the resource acquisition game that is the market economy, without acknowledging how often sheer random chance determines who wins and who loses.

We inflict bile and hate on people who have the temerity to say "I'm here, I'm human, and I have a right to be heard", while being different from us. We often focus on those superficial differences, rather than our underlying common humanity.

We fight turf wars based on where we were born, the colour of our skin, and which supernatural beings or economic doctrines we allow to guide our actions.

Is it possible to change this? Is it possible to build a world where we consider people to have inherent value just because they're fellow humans, rather than because of specific things they have done, or specific roles they take up?

I honestly don't know, but it seems worthwhile to try. I certainly find it hard to conceive of a better possible way to spend my own meagre slice of those seven billion seconds per second :)

The transition to multilingual programming

A recent thread on python-dev prompted me to summarise the current state of the ongoing industry wide transition from bilingual to multilingual programming as it relates to Python's cross-platform support. It also relates to the reasons why Python 3 turned out to be more disruptive than the core development team initially expected.

A good starting point for anyone interested in exploring this topic further is the "Origin and development" section of the Wikipedia article on Unicode, but I'll hit the key points below.

Monolingual computing

At their core, computers only understand single bits. Everything above that is based on conventions that ascribe higher level meanings to particular sequences of bits. One particularly important set of conventions for communicating between humans and computers is "text encodings": conventions that map particular sequences of bits to text in the actual languages humans read and write.

One of the oldest encodings still in common use is ASCII (which stands for "American Standard Code for Information Interchange"), developed during the 1960's (it just had its 50th birthday in 2013). This encoding maps the letters of the English alphabet (in both upper and lower case), the decimal digits, various punctuation characters and some additional "control codes" to the 128 numbers that can be encoded as a 7-bit sequence.

Many computer systems today still only work correctly with English - when you encounter such a system, it's a fairly good bet that either the system itself, or something it depends on, is limited to working with ASCII text. (If you're really unlucky, you might even get to work with modal 5-bit encodings like ITA-2, as I have. The legacy of the telegraph lives on!)
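
A quick Python 3 sketch of what hitting that limitation looks like:

>>> "cafe".encode("ascii")          # pure ASCII text is fine
b'cafe'
>>> "café".encode("ascii")          # anything outside ASCII's 128 code points is not
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 3: ordinal not in range(128)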

Working with local languages

The first attempts at dealing with this limitation of ASCII simply assigned meanings to the full range of 8-bit sequences. Known collectively as "Extended ASCII", each of these systems allowed for an additional 128 characters, which was enough to handle many European and Cyrillic scripts. Even 256 characters was nowhere near sufficient to deal with Indic or East Asian languages, however, so this time also saw a proliferation of ASCII incompatible encodings like ShiftJIS, ISO-2022 and Big5. This is why Python ships with support for dozens of codecs from around the world.
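
For example, the same Japanese text produces entirely different (and mutually incompatible) byte sequences under different encodings (a quick Python 3 sketch):

>>> text = "日本語"
>>> text.encode("shift_jis") == text.encode("utf-8")   # same text, different bytes
False
>>> len(text), len(text.encode("shift_jis")), len(text.encode("utf-8"))
(3, 6, 9)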

This proliferation of encodings required a way to tell software which encoding should be used to read the data. For protocols that were originally designed for communication between computers, agreeing on a common text encoding is usually handled as part of the protocol. In cases where no encoding information is supplied (or where there is a mismatch between the claimed encoding and the actual encoding), applications may make use of "encoding detection" algorithms, like those provided by the chardet package for Python. These algorithms aren't perfect, but can give good answers when given a sufficient amount of data to work with.
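
For instance, here's a sketch of chardet guessing the encoding of some Shift_JIS encoded Japanese text (the reported confidence, and whether a 'language' key appears at all, depend on the chardet version and the amount of data supplied):

>>> import chardet    # third party package: pip install chardet
>>> data = "これは日本語で書かれたそれなりに長いサンプルテキストです。".encode("shift_jis")
>>> chardet.detect(data)    # illustrative output only
{'encoding': 'SHIFT_JIS', 'confidence': 0.99, 'language': 'Japanese'}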

Local operating system interfaces, however, are a different story. Not only don't they inherently convey encoding information, but the nature of the problem is such that trying to use encoding detection isn't practical. Two key systems arose in an attempt to deal with this problem:

  • Windows code pages
  • POSIX locale encodings

With both of these systems, a program would pick a code page or locale, and use the corresponding text encoding to decide how to interpret text for display to the user or combination with other text. This may include deciding how to display information about the contents of the computer itself (like listing the files in a directory).
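
From Python, you can see which assumption has been picked up from the current code page or locale settings (a quick sketch - the answers below are from a UTF-8 configured Linux system; a Western European Windows system would typically report 'cp1252' instead):

>>> import locale, sys
>>> locale.getpreferredencoding()   # derived from the POSIX locale or the Windows ANSI code page
'UTF-8'
>>> sys.getfilesystemencoding()     # the assumption used for OS interfaces like filenames
'utf-8'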

The fundamental premise of these two systems is that the computer only needs to speak the language of its immediate users. So, while the computer is theoretically capable of communicating in any language, it can effectively only communicate with humans in one language at a time. All of the data a given application was working with would need to be in a consistent encoding, or the result would be uninterpretable nonsense, something the Japanese (and eventually everyone else) came to call mojibake.

It isn't a coincidence that the name for this concept came from an Asian country: the encoding problems encountered there make the issues encountered with European and Cyrillic languages look trivial by comparison.
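
A small sketch of how mojibake arises - Japanese text written out as Shift_JIS, then read back assuming a Western European code page (the exact garbage produced depends on which wrong encoding gets assumed):

>>> data = "日本語".encode("shift_jis")   # written out by a Japanese system
>>> data.decode("cp1252")                 # read back with a Western European assumption
'“ú–{Œê'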

Unfortunately, this "bilingual computing" approach (so called because the computer could generally handle English in addition to the local language) causes some serious problems once you consider communicating between computers. While some of those problems were specific to network protocols, there are some more serious ones that arise when dealing with nominally "local" interfaces:

  • networked computing meant one username might be used across multiple systems, including different operating systems
  • network drives allow a single file server to be accessed from multiple clients, including different operating systems
  • portable media (like DVDs and USB keys) allow the same filesystem to be accessed from multiple devices at different points in time
  • data synchronisation services like Dropbox need to faithfully replicate a filesystem hierarchy not only across different desktop environments, but also to mobile devices

For these formats and interfaces, which were originally designed only for local interoperability, communicating encoding information is generally difficult, and the encoding of the data doesn't necessarily match the claimed encoding of the platform you're running on.

Unicode and the rise of multilingual computing

The path to addressing the fundamental limitations of bilingual computing actually started more than 25 years ago, back in the late 1980's. An initial draft proposal for a 16-bit "universal encoding" was released in 1988, the Unicode Consortium was formed in early 1991 and the first volume of the first version of Unicode was published later that same year.

Microsoft added new text handling and operating system APIs to Windows based on the 16-bit C level wchar_t type, and Sun also adopted Unicode as part of the core design of Java's approach to handling text.

However, there was a problem. The original Unicode design had decided that "16 bits ought to be enough for anybody" by restricting its target to only modern scripts, and only frequently used characters within those scripts. When you look at the "rarely used" Kanji and Han characters for Japanese and Chinese, though, you find that they include many characters that are regularly used for the names of people and places - they're just largely restricted to proper nouns, and so won't show up in a normal vocabulary search. So Unicode 2.0 was defined in 1996, expanding the system out to a maximum of 21 bits per code point (using up to 32 bits per code point for storage).

As a result, Windows (including the CLR) and Java now use the little-endian variant of UTF-16 to allow their text APIs to handle arbitrary Unicode code points. The original 16-bit code space is now referred to as the Basic Multilingual Plane.

While all that was going on, the POSIX world ended up adopting a different strategy for migrating to full Unicode support: attempting to standardise on the ASCII compatible UTF-8 text encoding.

The choice between using UTF-8 and UTF-16-LE as the preferred local text encoding involves some complicated trade-offs, and that's reflected in the fact that they have ended up being at the heart of two competing approaches to multilingual computing.

Choosing UTF-8 aims to treat formatting text for communication with the user as "just a display issue". It's a low impact design that will "just work" for a lot of software, but it comes at a price:

  • because encoding consistency checks are mostly avoided, data in different encodings may be freely concatenated and passed on to other applications. Such data is typically not usable by the receiving application.
  • for interfaces without encoding information available, it is often necessary to assume an appropriate encoding in order to display information to the user, or to transform it to a different encoding for communication with another system that may not share the local system's encoding assumptions. These assumptions may not be correct, but won't necessarily cause an error - the data may just be silently misinterpreted as something other than what was originally intended.
  • because data is generally decoded far from where it was introduced, it can be difficult to discover the origin of encoding errors.
  • as a variable width encoding, it is more difficult to develop efficient string manipulation algorithms for UTF-8. Algorithms originally designed for fixed width encodings will no longer work.
  • as a specific instance of the previous point, it isn't possible to split UTF-8 encoded text at arbitrary locations. Care needs to be taken to ensure splits only occur at code point boundaries.

UTF-16-LE shares the last two problems, but to a lesser degree (simply due to the fact that most commonly used code points are in the 16-bit Basic Multilingual Plane). However, because it isn't generally suitable for use in network protocols and file formats (without significant additional encoding markers), the explicit decoding and encoding required encourages designs with a clear separation between binary data (including encoded text) and decoded text data.
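
A quick Python 3 illustration of those last two points for UTF-8 (UTF-16 has the same issue for code points outside the Basic Multilingual Plane):

>>> data = "über".encode("utf-8")
>>> len("über"), len(data)          # 4 code points, but 5 bytes
(4, 5)
>>> data[:1].decode("utf-8")        # splitting mid-character fails...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc3 in position 0: unexpected end of data
>>> data[:2].decode("utf-8")        # ...while splitting on a code point boundary is fine
'ü'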

Through the lens of Python

Python and Unicode were born on opposite sides of the Atlantic Ocean at roughly the same time (1991). The growing adoption of Unicode within the computing industry has had a profound impact on the evolution of the language.

Python 1.x was purely a product of the bilingual computing era - it had no support for Unicode based text handling at all, and was hence largely limited to 8-bit ASCII compatible encodings for text processing.

Python 2.x was still primarily a product of the bilingual era, but added multilingual support as an optional addon, in the form of the unicode type and support for a wide variety of text encodings. PEP 100 goes into the many technical details that needed to be covered in order to incorporate that feature. With Python 2, you can make multilingual programming work, but it requires an active decision on the part of the application developer, or at least that they follow the guidelines of a framework that handles the problem on their behalf.
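
A quick sketch of what that optional support looks like at a Python 2.7 prompt:

>>> data = "caf\xc3\xa9"            # a Python 2 str is really just a sequence of bytes
>>> text = data.decode("utf-8")     # explicitly opting in to the unicode type
>>> text
u'caf\xe9'
>>> text.encode("utf-8") == data    # and explicitly converting back to bytes
True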

By contrast, Python 3.x is designed to be a native denizen of the multilingual computing world. Support for multiple languages extends as far as the variable naming system, such that languages other than English become almost as well supported as English already was in Python 2. While the English inspired keywords and the English naming in the standard library and on the Python Package Index mean that Python's "native" language and the preferred language for global collaboration will always be English, the new design allows a lot more flexibility when working with data in other languages.

Consider processing a data table where the headings are names of Japanese individuals, and we'd like to use collections.namedtuple to process each row. Python 2 simply can't handle this task:

>>> from collections import namedtuple
>>> People = namedtuple("People", u"陽斗 慶子 七海")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/collections.py", line 310, in namedtuple
    field_names = map(str, field_names)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-1: ordinal not in range(128)

Users need to either restrict themselves to dictionary style lookups rather than attribute access, or else use romanised versions of their names (Haruto, Keiko, Nanami for the example). However, the case of "Haruto" is an interesting one, as there are at least 3 different ways of writing it as Kanji (陽斗, 陽翔, 大翔), but they are all romanised as the same string (Haruto). If you try to use romaji to handle a data set that contains more than one variant of that name, you're going to get spurious collisions.
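
The collision is easy to demonstrate with a plain dictionary (a Python 3 sketch):

>>> readings = {"陽斗": "Haruto", "陽翔": "Haruto", "大翔": "Haruto"}
>>> len(readings)                   # three distinct written names...
3
>>> len(set(readings.values()))     # ...but only one romanised form to tell them apart
1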

Python 3 takes a very different perspective on this problem. It says it should just work, and it makes sure it does:

>>> from collections import namedtuple
>>> People = namedtuple("People", u"陽斗 慶子 七海")
>>> d = People(1, 2, 3)
>>> d.陽斗
1
>>> d.慶子
2
>>> d.七海
3

This change greatly expands the kinds of "data driven" use cases Python can support in areas where the ASCII based assumptions of Python 2 would cause serious problems.

Python 3 still needs to deal with improperly encoded data however, so it provides a mechanism for arbitrary binary data to be "smuggled" through text strings in the Unicode Private Use Area. This feature was added by PEP 383 and is managed through the surrogateescape error handler, which is used by default on most operating system interfaces. This recreates the old Python 2 behaviour of passing improperly encoded data through unchanged when dealing solely with local operating system interfaces, but complaining when such improperly encoded data is injected into another interface. The codec error handling system provides several tools to deal with these files, and we're looking at adding a few more relevant convenience functions for Python 3.5.
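
A brief sketch of surrogateescape at work (the trailing byte here is just an arbitrary example of improperly encoded data):

>>> raw = b"ok:\xfd"                                   # not valid UTF-8
>>> text = raw.decode("utf-8", errors="surrogateescape")
>>> text
'ok:\udcfd'
>>> text.encode("utf-8", errors="surrogateescape")     # round trips without data loss
b'ok:\xfd'
>>> text.encode("utf-8")                               # but refuses to leak into strict interfaces
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'utf-8' codec can't encode character '\udcfd' in position 3: surrogates not allowed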

The underlying Unicode changes in Python 3 also made PEP 393 possible, which changed the way the CPython interpreter stores text internally. In Python 2, even pure ASCII text would consume four bytes per code point on Linux systems when stored in the unicode type. Using the "narrow build" option (as the Python 2 Windows builds from python.org do) reduced that to only two bytes per code point when operating within the Basic Multilingual Plane, but at the cost of potentially producing wrong answers when asked to operate on code points outside the Basic Multilingual Plane. By contrast, starting with Python 3.3, CPython now stores text internally using the smallest fixed width data unit possible. That is, latin-1 text uses 8 bits per code point, UCS-2 (Basic Multilingual Plane) text uses 16 bits per code point, and only text containing code points outside the Basic Multilingual Plane will expand to needing the full 32 bits per code point. This can not only significantly reduce the amount of memory needed for multilingual applications, but may also increase their speed (as reducing memory usage also reduces the time spent copying data around).
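
The effect of the new storage model is easy to observe with sys.getsizeof (the exact byte counts vary between CPython versions and platforms, so only the relative ordering is shown here):

>>> import sys
>>> ascii_only, bmp_only, astral = "a" * 1000, "α" * 1000, "🐍" * 1000
>>> sys.getsizeof(ascii_only) < sys.getsizeof(bmp_only) < sys.getsizeof(astral)   # 1, 2 and 4 bytes per code point
True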

Are we there yet?

In a word, no. Not for Python 3.4, and not for the computing industry at large. We're much closer than we ever have been before, though. Most POSIX systems now use UTF-8 as their default encoding, and many systems offer a C.UTF-8 locale as an alternative to the traditional ASCII based C locale. When dealing solely with properly encoded data and metadata, and properly configured systems, Python 3 should "just work", even when exchanging data between different platforms.

For Python 3, the remaining challenges fall into a few areas:

  • helping existing Python 2 users adopt the optional multilingual features that will prepare them for eventual migration to Python 3 (as well as reassuring those users that don't wish to migrate that Python 2 is still fully supported, and will remain so for at least the next several years, and potentially longer for customers of commercial redistributors)
  • adding back some features for working entirely in the binary domain that were removed in the original Python 3 transition due to an initial assessment that they were operations that only made sense on text data (PEP 461 summary: bytes.__mod__ is coming back in Python 3.5 as a valid binary domain operation, as sketched after this list, while bytes.format stays gone as an operation that only makes sense when working with actual text data)
  • better handling of improperly decoded data, including poor encoding recommendations from the operating system (for example, Python 3.5 will be more sceptical when the operating system tells it the preferred encoding is ASCII and will enable the surrogateescape error handler on sys.stdout when it occurs)
  • eliminating most remaining usage of the legacy code page and locale encoding systems in the CPython interpreter (this most notably affects the Windows console interface and argument decoding on POSIX. While these aren't easy problems to solve, it will still hopefully be possible to address them for Python 3.5)
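
For reference, the restored bytes interpolation mentioned above looks like this from Python 3.5 onwards:

>>> b"Content-Length: %d\r\n" % 42
b'Content-Length: 42\r\n'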

More broadly, each major platform has its own significant challenges to address:

  • for POSIX systems, there are still a lot of systems that don't use UTF-8 as the preferred encoding, and the assumption of ASCII as the preferred encoding in the default C locale is positively archaic. There is also a lot of POSIX software that still believes in the "text is just encoded bytes" assumption, and will happily produce mojibake that makes no sense to other applications or systems.
  • for Windows, keeping the old 8-bit APIs around was deemed necessary for backwards compatibility, but this also means that there is still a lot of Windows software that simply doesn't handle multilingual computing correctly.
  • for both Windows and the JVM, a fair amount of nominally multilingual software actually only works correctly with data in the basic multilingual plane. This is a smaller problem than not supporting multilingual computing at all, but was quite a noticeable problem in Python 2's own Windows support.

Mac OS X is the platform most tightly controlled by any one entity (Apple), and they're actually in the best position out of all of the current major platforms when it comes to handling multilingual computing correctly. They've been one of the major drivers of Unicode since the beginning (two of the authors of the initial Unicode proposal were Apple engineers), and were able to force the necessary configuration changes on all their systems, rather than having to work with an extensive network of OEM partners (Windows, commercial Linux vendors) or relatively loose collaborations of individuals and organisations (community Linux distributions).

Modern mobile platforms are generally in a better position than desktop operating systems, mostly by virtue of being newer, and hence defined after Unicode was better understood. However, the UTF-8 vs UTF-16-LE distinction for text handling exists even there, thanks to the Java inspired Dalvik VM in Android (plus the cloud-backed nature of modern smartphones means you're even more likely to encounter files from multiple machines when working on a mobile device).

Why Python 4.0 won't be like Python 3.0

Newcomers to python-ideas occasionally make reference to the idea of "Python 4000" when proposing backwards incompatible changes that don't offer a clear migration path from currently legal Python 3 code. After all, we allowed that kind of change for Python 3.0, so why wouldn't we allow it for Python 4.0?

I've heard that question enough times now (including the more concerned phrasing "You made a big backwards compatibility break once, how do I know you won't do it again?"), that I figured I'd record my answer here, so I'd be able to refer people back to it in the future.

What are the current expectations for Python 4.0?

My current expectation is that Python 4.0 will merely be "the release that comes after Python 3.9". That's it. No profound changes to the language, no major backwards compatibility breaks - going from Python 3.9 to 4.0 should be as uneventful as going from Python 3.3 to 3.4 (or from 2.6 to 2.7). I even expect the stable Application Binary Interface (as first defined in PEP 384) to be preserved across the boundary.

At the current rate of language feature releases (roughly every 18 months), that means we would likely see Python 4.0 some time in 2023, rather than seeing Python 3.10.

Update: After this post was originally written back in 2014, subsequent discussions on the core python-dev mailing list led to the conclusion that the release after 3.9 will probably just be 3.10. However, a 4.0 will presumably still happen some day, and the premise of this article is expected to hold for that release: it will be held to the same backwards compatibility obligations as a Python 3.X to 3.X+1 update.

So how will Python continue to evolve?

First and foremost, nothing has changed about the Python Enhancement Proposal process - backwards compatible changes are still proposed all the time, with new modules (like asyncio) and language features (like yield from) being added to enhance the capabilities available to Python applications. As time goes by, Python 3 will continue to pull further ahead of Python 2 in terms of the capabilities it offers by default, even if Python 2 users have access to equivalent capabilities through third party modules or backports from Python 3.

Competing interpreter implementations and extensions will also continue to explore different ways of enhancing Python, including PyPy's exploration of JIT-compiler generation and software transactional memory, and the scientific and data analysis community's exploration of array oriented programming that takes full advantage of the vectorisation capabilities offered by modern CPUs and GPUs. Integration with other virtual machine runtimes (like the JVM and CLR) is also expected to improve with time, especially as the inroads Python is making in the education sector are likely to make it ever more popular as an embedded scripting language in larger applications running in those environments.

For backwards incompatible changes, PEP 387 provides a reasonable overview of the approach that was used for years in the Python 2 series, and still applies today: if a feature is identified as being excessively problematic, then it may be deprecated and eventually removed.

However, a number of other changes have been made to the development and release process that make it less likely that such deprecations will be needed within the Python 3 series:

  • the greater emphasis on the Python Package Index, as indicated by the collaboration between the CPython core development team and the Python Packaging Authority, as well as the bundling of the pip installer with Python 3.4+, reduces the pressure to add modules to the standard library before they're sufficiently stable to accommodate the relatively slow language update cycle
  • the "provisional API" concept (introduced in PEP 411) makes it possible to apply a "settling in" period to libraries and APIs that are judged likely to benefit from broader feedback before offering the standard backwards compatibility guarantees
  • a lot of accumulated legacy behaviour really was cleared out in the Python 3 transition, and the requirements for new additions to Python and the standard library are much stricter now than they were in the Python 1.x and Python 2.x days
  • the widespread development of "single source" Python 2/3 libraries and frameworks strongly encourages the use of "documented deprecation" in Python 3, even when features are replaced with newer, preferred, alternatives. In these cases, a deprecation notice is placed in the documentation, suggesting the approach that is preferred for new code, but no programmatic deprecation warning is added. This allows existing code, including code supporting both Python 2 and Python 3, to be left unchanged (at the expense of new users potentially having slightly more to learn when tasked with maintaining existing code bases).

From (mostly) English to all written languages

It's also worth noting that Python 3 wasn't expected to be as disruptive as it turned out to be. Of all the backwards incompatible changes in Python 3, many of the serious barriers to migration can be laid at the feet of one little bullet point in PEP 3100:

  • Make all strings be Unicode, and have a separate bytes() type. The new string type will be called 'str'.

PEP 3100 was the home for Python 3 changes that were considered sufficiently non-controversial that no separate PEP was considered necessary. The reason this particular change was considered non-controversial was because our experience with Python 2 had shown that the authors of web and GUI frameworks were right: dealing sensibly with Unicode as an application developer means ensuring all text data is converted from binary as close to the system boundary as possible, manipulated as text, and then converted back to binary for output purposes.

Unfortunately, Python 2 doesn't encourage developers to write programs that way - it blurs the boundaries between binary data and text extensively, and makes it difficult for developers to keep the two separate in their heads, let alone in their code. So web and GUI framework authors have to tell their Python 2 users "always use Unicode text. If you don't, you may suffer from obscure and hard to track down bugs when dealing with Unicode input".

Python 3 is different: it imposes a much greater separation between the "binary domain" and the "text domain", making it easier to write normal application code, while making it a bit harder to write code that works with system boundaries where the distinction between binary and text data can be substantially less clear. I've written in more detail elsewhere regarding what actually changed in the text model between Python 2 and Python 3.
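
As a rough sketch of what that boundary-focused pattern looks like in Python 3 (the byte values and field layout here are purely illustrative):

>>> raw = b"Andr\xc3\xa9,42\n"            # bytes read from a file, socket or pipe
>>> text = raw.decode("utf-8")            # decode once, at the system boundary
>>> name, _, age = text.strip().partition(",")
>>> reply = "{} is {}\n".format(name, age)
>>> reply.encode("utf-8")                 # encode once, on the way back out
b'Andr\xc3\xa9 is 42\n'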

This revolution in Python's Unicode support is taking place against a larger background migration of computational text manipulation from the English-only ASCII (officially defined in 1963), through the complexity of the "binary data + encoding declaration" model (including the C/POSIX locale and Windows code page systems introduced in the late 1980's) and the initial 16-bit only version of the Unicode standard (released in 1991) to the relatively comprehensive modern Unicode code point system (first defined in 1996, with new major updates released every few years).

Why mention this point? Because this switch to "Unicode by default" is the most disruptive of the backwards incompatible changes in Python 3, and, unlike the others (which were more language specific), it is one small part of a much larger industry wide change in how text data is represented and manipulated. With the language specific issues cleared out by the Python 3 transition, a much higher barrier to entry for new language features compared to the early days of Python, and no other industry wide migrations on the scale of the switch from "binary data with an encoding" to Unicode for text modelling currently in progress, I can't see any kind of change coming up that would require a Python 3 style backwards compatibility break and parallel support period. Instead, I expect we'll be able to accommodate any future language evolution within the normal change management processes, and any proposal that can't be handled that way will just get rejected as imposing an unacceptably high cost on the community and the core development team.

Some Suggestions for Teaching Python

I recently had the chance to attend a Software Carpentry bootcamp at the University of Queensland (as a teaching assistant), as well as seeing a presentation from one of UQ's tutors at PyCon Australia 2014.

While many of the issues they encountered were inherent in the complexity of teaching programming, a few seemed like things that could be avoided.

Getting floating point results from integer division

In Python 2, integer division copies C in truncating the answer by default:

    $ python -c "print(3/4)"
    0

Promoting to floating point requires type coercion, a command line flag or a future import:

    $ python -c "print(float(3)/4)"
    0.75
    $ python -Qnew -c "print(3/4)"
    0.75
    $ python -c "from __future__ import division; print(3/4)"
    0.75

Python 3 just does the right thing by default, so one way to avoid the problem entirely is to teach Python 3 instead of Python 2:

    $ python3 -c "print(3/4)"
    0.75

(In both Python 2 and 3, the // floor division operator explicitly requests truncating division when it is desired)
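
For reference, // rounds towards negative infinity rather than truncating towards zero the way C does, which is worth pointing out explicitly to students with a C background:

    $ python3 -c "print(3//4)"
    0
    $ python3 -c "print(-3//4)"
    -1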

Common Python 2/3 syntax for printing values

I've been using Python 2 and 3 in parallel for more than 8 years now (Python 3.0 was released in 2008, but the project started in earnest a couple of years before that, while Python 2.5 was still in development).

One essential trick I have learned in order to make regularly switching back and forth feasible is to limit myself to the common print syntax that works the same in both versions: passing a single argument surrounded by parentheses.

$ python -c 'print("Hello world!")'
Hello world!
$ python3 -c 'print("Hello world!")'
Hello world!

If I need to display multiple values, I'll use string formatting rather than passing multiple arguments to print, since the multiple argument form behaves differently in the two versions (Python 2 prints a tuple, while Python 3 prints the values separated by spaces).

$ python -c 'print("{} {}{}".format("Hello", "world", "!"))'
Hello world!
$ python3 -c 'print("{} {}{}".format("Hello", "world", "!"))'
Hello world!

Rather than doing this, the Software Carpentry material that was used at the bootcamp I attended used the legacy Python 2 only print syntax extensively, causing examples that otherwise would have worked fine in either version to fail for students that happened to be running Python 3. Adopting the shared syntax for printing values could be enough to make the course largely version independent.

Distinguishing between returning and printing values

One problem noted both at the bootcamp and by presenters at PyCon Australia was the challenge of teaching students the difference between printing and returning values. The problem is the "Print" part of the Read-Eval-Print-Loop provided by Python's interactive interpreter:

>>> def print_arg(x):
...     print(x)
...
>>> def return_arg(x):
...     return x
...
>>> print_arg(10)
10
>>> return_arg(10)
10

There's no obvious difference in output at the interactive prompt, especially for types like numbers where the results of str and repr are the same. Even when they're different, those differences may not be obvious to a student:

>>> print_arg("Hello world")
Hello world
>>> return_arg("Hello world")
'Hello world'

While I don't have a definitive answer for this one, an experiment that seems worth trying to me is to teach students how to replace sys.displayhook. In particular, I suggest demonstrating the following change, and seeing if it helps explain the difference between printing output for display to the user and returning values for further processing:

>>> def new_displayhook(obj):
...     if obj is not None:
...         print("-> {!r}".format(obj))
...
>>> import sys
>>> sys.displayhook = new_displayhook
>>> print_arg(10)
10
>>> return_arg(10)
-> 10

Understanding the difference between printing and returning is essential to learning to use functions effectively, and tweaking the display of results this way may help make the difference more obvious.

Addendum: IPython (including IPython Notebook)

The initial examples above focused on the standard CPython runtime, including the default interactive interpreter. The IPython interactive interpreter, including the IPython Notebook, has a couple of interesting differences in behaviour that are relevant to the above comments.

Firstly, it does display return values and printed values differently, prefacing results with an output reference number:

In [1]: print 10
10

In [2]: 10
Out[2]: 10

Secondly, it has an optional "autocall" feature that allows a user to tell IPython to automatically add the missing parentheses to a function call if the user leaves them out:

$ ipython3 --autocall=1 -c "print 10"
-> print(10)
10

This is a general purpose feature that allows users to make their IPython sessions behave more like languages where parentheses aren't required to call a function (most notably, IPython's autocall feature closely resembles MATLAB's "command syntax" notation for calling functions).

It also has the side effect that users that use IPython, have autocall enabled, and don't use any of the more esoteric quirks of the Python 2 print statement (like stream redirection or suppressing the trailing newline) may not even notice that print became an ordinary builtin in Python 3.

On Wielding Power

Making the usually implied disclaimer completely explicit on this one: the views expressed in this article are my own, and do not necessarily reflect the position of any organisations of which I am a member.

Power is an interesting thing, and something that, as a society at large (rather than the specialists that spend a lot of time thinking about it), we really don't spend enough time giving serious consideration to. Trust and fear, hope and despair, interwoven with the complex dynamics of interpersonal relationships.

The most obvious kind of power is based on fear: people listening when you tell them what to do, based on a fear of the consequences if they ignore you. Many corporations have traditionally operated on this model: do what you're told, or you'll be fired. "You might lose your job" then hangs as an implicit threat behind every interaction with your management chain, and a complex web of legal obligations and social safety nets has arisen (to a greater or lesser degree in different countries) to help manage the effectiveness of this threat and redress the power imbalance. Fear based power is also, ultimately, the kind of power embodied in the legal system.

That's not the only kind of power though, and this post is largely about another form of it: power based on trust.

Power based on trust

Fear based power can be transferred fairly effectively: disobeying a delegate can be punished as severely as disobeying the original authority, and so it goes. Interpersonal relationships don't get much consideration in such environments - they're about getting the job done, without any real concern for the feelings of the people doing it.

The efficiency of that kind of centralised control degrades fairly quickly though - with everyone being in constant fear of punishment, a whole lot of effort ends up being expended on figuring out what the orders are, communicating the orders, ensuring the orders have been followed, requesting new orders when the situation changes, recording exactly what was done to implement the orders and ensuring that if anything goes wrong it was the original orders that were to blame rather than the people following them and so on and so forth. It's like a human body that has no local reflexes, but instead has to think through the idea of removing its hand from a hotplate as an act of deliberate will.

There's a different kind of power though, summed up well in this YouTube video. What that kind of power is based on is the idea that once people have their core survival needs met, there are three key motivators that often work better than money: autonomy, mastery and purpose. (Note: this is after core survival needs are met. If people are still stressed about food, shelter, their health and their personal relationships, then autonomy, mastery and purpose can go take a hike)

At its best, an environment based on autonomy, mastery and purpose is one of mutual trust and respect. The purpose of the overall organisation and its individual components is sufficiently well articulated that everyone involved understands their responsibilities and how their efforts contribute to the greater whole, individuals are given a high degree of autonomy in determining how best to meet their obligations, and are supported in the pursuit of the required mastery to fulfil those obligations as well as possible.

This is the kind of distributed trust that Silicon Valley tries to sum up in its "move fast and break things" motto, but fails miserably in doing so. The reason? Those last two words there: "break things". It's an incredibly technocratic view of the world, and one that leaves out the most important element of any situation: the people.

This is a key point many technologists miss: ultimately, technology doesn't matter. It is not an end unto itself - it is only ever a means to an end, and that end will almost always be something related to people (we humans are an egocentric bunch). When you "break things" you hurt people, directly or indirectly. Now, maybe those things needed to be broken (and a lot of them do). Maybe those things were already hurting people, and the change just shifts (and hopefully lessens) the burden. But the specific phrasing in the Silicon Valley motto is one of cavalier irresponsibility, of freedom from consequences. "Don't think about the people that may be hurt by your actions - just move fast and break things, that's what we do here!".

This is NOT OK.

Yes, it needs to be OK to break things, whether deliberately or by mistake. Without that, "autonomy" becomes a myth, and we are left with stagnation. However, there's a difference between doing so carelessly, without accounting for the impact on those that may be harmed by the chosen course of action, and doing so while taking full responsibility for the harm your actions may have caused.

And with that, it's time to shift topics a bit. I assure you they're actually related, which may become clearer further down.

What is a corporation?

The glib answer here would be "a toxic cesspool of humanity", and I'll grant that's a fair description of a lot of them (see the earlier observations regarding fear based power). I am a capitalist though (albeit one that is strongly in favour of redistributive tax systems), so I see more potential for good in them than many other folks do.

So I'm going to give my perspective on the way some of the non-toxic ones work when running smoothly, at least in regards to three roles: the Chief Financial Officer, the Chief Technology Officer and the Chief Executive Officer. (You may choose not to believe me when I say non-toxic corporations are a real thing, but I assure you, such companies do exist, as most people don't actually like working for toxic cesspools of humanity. It's just that fully avoiding the descent into toxicity as organisations grow is, as yet, an unsolved problem in society. Radical transparency does seem to help a lot, though. Something about the cleansing power of sunlight and competing centres of power...).

The non-toxic CFO role is pretty straightforward: their job is to make sure that everyone gets paid, and the company not only survives, but thrives.

The non-toxic CTO role is also pretty straightforward: they're the ultimate authority on the technological foundations of an organisation. What's on the horizon that they need to be aware of? What's growing old and needs to be phased out in favour of something more recent? What just needs a bit of additional investment to bring it up to scratch?

The role of a CEO is a lot less clear. "Finance" is pretty clear, as is "Technology". But what does "Executive" mean? They're not just in charge of the executives - they're in charge of the whole company.

My take on it? The CEO is ultimately the "keeper of the company culture". They ultimately decide not only what gets done, but also how it gets done. While they have a lot of other important responsibilities, a key one in my mind is that it is the CEO's job to make sure that both the CFO and CTO remember to account for the people that will ultimately be tasked with handling "execution". They're the ones that say "no, we're not cutting that, it's important to the way we operate - we need to find another way to save money" (remember, we're only talking about the non-toxic corporations here).

So when employees of a corporation expect that company to do the right thing by them? They're trusting the CEO. Not the CFO. Not the CTO. The CEO. Arguably the key defining characteristic of a non-toxic corporation is that the CEO is worthy of that trust, as they will not only make those commitments to their employees, but also put the mechanisms in place to ensure the commitments are actually met. (This doesn't require any kind-hearted altruism on the CEO's part, by the way. You can get the same outcome through hard-nosed capitalism since making honest commitments to your staff and then keeping them is what "our people are our greatest asset" actually looks like in practice - it's just that a lot of organisations that say that don't actually mean it)

Commenting on other people's business

And that brings us to the specific reason I sat down to write this article: a tweet I posted earlier today regarding Mozilla's public debate over the board's choice of CEO. Specifically, I wrote:

I would take Eich accepting the Mozilla CEO role to mean his personal pride matters more to him than Mozilla's mission.

That's a pretty bold statement to make about someone I don't know and have never even met, and in relation to an organisation that I don't have any direct relationship with beyond being a user of their software and a fan of their mission.

Note the things I didn't suggest. I didn't suggest he resign from his existing position as CTO. I didn't suggest that the Mozilla board withdraw their offer of the CEO position. However, I did state that, from my perspective as an outsider that wants to see Mozilla execute on their mission to the best of their ability, "trust me" is a sufficiently big call to have to make for a role as critical as the CEO position that I don't believe Eich should be asking that of his fellow members of the Mozilla community. Actions have consequences, and one of those consequences can be "No, you no longer have the right to request our trust - you actively hurt us, and we don't believe you when you say you wish to make amends".

To Eich's credit, he at least didn't just say "trust me", but rather made a number of specific commitments. However, the time to build that credibility in a relatively open organisation is before accepting such a significant role, not after. Otherwise, there will always be a lingering doubt for affected individuals that any public statements are a matter of the responsibilities of the position, rather than a genuine change in personal convictions. When it comes to matters like a commitment to inclusiveness you don't want a CEO that is going through the motions out of a sense of obligation: this stuff is hard work and tempting to skimp on, even when you do care about it at a personal level (as a case in point - it would have been so much easier for me to not comment on this situation at all, that I almost left it at just a couple of vague allusions on Twitter rather than getting specific).

Separating the personal from the professional is always difficult, and in few roles more so than that of the CEO. During my tenure at Boeing, we had two CEOs asked to resign due to unprofessional conduct. The military industrial complex is a sordid mire of duplicitous misbehaviour and waste that makes the open source technology community look like saints by comparison (and I'll let you draw your own conclusions as to what it says about me personally that I survived there for more than a decade), yet even they were of the opinion that personal conduct matters at the CEO level, even more so than in other less prominent roles.

For the record, I personally do hope Eich's newfound commitment to inclusiveness is genuine, and that the concerns raised regarding his appointment as CEO prove to be unfounded. I'd prefer to live in a world where the blog post linked above represents a genuine change of heart, rather than being merely a tactical consideration to appease particular groups.

Ultimately, though, my opinion on this topic doesn't matter - it's now up to Eich to demonstrate through his actions that he's worthy of the trust that the Mozilla board have placed in him, and for concerned members of the Mozilla community to decide whether they're willing to adopt a "wait and see" approach as suggested, or if they consider that belated request in and of itself to be an unacceptable breach of their trust.

Change the Future - one small slice of PyCon US 2013

I'm currently kicking back in Red Hat's Mountain View office (I normally work from the Brisbane office in Australia) after a lovely lunch with some of the local Red Hatters, unwinding a bit and reflecting on an absolutely amazing week at PyCon US 2013 just down the road in Santa Clara.

For me, it started last Wednesday with the Python Language Summit, an at-least-annual-sometimes-biannual get together of the developers of several major Python implementations, including CPython (the reference interpreter), PyPy, Jython and IronPython. Even with a full day, there were still a lot of interesting topics we didn't get to and will be thrashing out on the mailing lists as usual. However, good progress was made on a few of the more controversial items, and there are definitely exciting developments in store for Python 3.4 (due in early 2014, probably shortly after PyCon in Montreal if past history is anything to go by).

Thursday was a real eye-opener for me. While I did have to duck out at one point for a meeting with a couple of the other CPython developers, I spent most of it helping out at the second of the Young Coders tutorials run by Katie Cunningham and Barbara Shaurette. These tutorials were conducted using Raspberry Pis with rented peripherals, and the kids attending received both the Pi they were using as well as a couple of introductory programming books.

Watching the class, and listening to Katie's and Barbara's feedback on what they need from us in the core substantially changed my perspective on what IDLE can (and, I think, should) become. Roger Serwy (the creator of IdleX, a version of IDLE with various improvements) has now been granted access to the CPython repo to streamline the process of fixing the reference implementation, and we're working on plans to make the behaviour of IDLE more consistent across all currently supported Python versions (including Python 2.7). (Some aspects of this, especially Roger's involvement, are similar to what happened years ago for Python 2.3 when Kurt B. Kaiser, the PSF's treasurer, shepherded the reintegration of the IDLEfork project and its major enhancements to IDLE back into the reference IDLE implementation in the Python standard library).

Friday saw the start of the conference proper, with inspirational keynotes from Jesse Noller (conference chair and PSF board member) on helping to change the future by changing the way we introduce the next generation to the computers that are now an ever-present aspect of our lives, and from Eben Upton (co-founder of the Raspberry Pi foundation), on how the Pi came to be the educational project it is today, and some thoughts on how it might evolve into the future.

Jesse's keynote included the announcement that every attendee (all 2500 of them) would be receiving a free Raspberry Pi, and that any Pis that attendees didn't want to claim would be redistributed to various educational groups and programs. Not only that, but Jesse also announced http://raspberry.io/, a new site for sharing Raspberry Pi based projects and resources, as well as a "Raspberry Pi Hack Lab" running for the duration of the conference, where attendees could hook their Pis up to a keyboard and monitor, as well as experiment with various bits and pieces of electronics donated by one of the conference sponsors. Richard Jones also stepped up to run some additional short introductory PyGame tutorials in the lab (he had run a full 3 hour session on PyGame as part of the paid tutorials on the Wednesday and Thursday prior to the conference).

One key personal theme for the conference revolved around the fact that I've volunteered to be Guido's delegate in making the final decisions on how we reshape Python's packaging ecosystem in the lead up to the Python 3.4 release. I'll be writing quite a bit more on that topic over the coming weeks, so here I'll just note that it started with proposing some changes to the Python Enhancement Proposal process at the language summit on the Wednesday, continued through the announcement of the coming setuptools/distribute merger on Thursday, the "packaging and distribution" mini-summit I organised for developers on the Friday night, the "Directions in Packaging" Q&A panel we conducted on the Saturday afternoon, some wonderful discussions with Simeon Franklin on his blog regarding the way the current packaging and distributions issues detract from Python's beginner friendliness and on into various interesting discussions, proposals and development at the sprints in the days following the conference.

Unfortunately, I didn't actually get to meet Simeon in person, even though I had flagged his poster as one I really wanted to go see during the poster session. Instead, I spent that time at the Red Hat booth in the PyCon Jobs Fair.  The Jobs Fair is a wonderful idea from the conference organisers that, along with the Expo Hall, recognises the multi-role nature of PyCon: as a community conference for sharing and learning (through the summits, scheduled talks, lightning talks, poster session, open spaces, paid tutorials, Young Coders sessions, Raspberry Pi hack lab, and sprints), as a way for sponsors to advertise their services to developers (through the Expo Hall and sponsor tutorials) and as a way for sponsors to recruit new developers (through the Jobs Fair). PyCon has long involved elements of all of these things (albeit perhaps not at the scale achieved this year), but having the separate Expo Hall and Jobs Fair helps keep sales and recruitment activity from bleeding into the community parts of the conference, while still giving sponsors a suitable opportunity to connect with the development community.

Both at the Jobs Fair and during the rest of the conference, I was explaining to anyone that was willing to listen what I see as Red Hat's role in bridging the vast gulf between open source software enthusiasts (professionals and amateurs alike) and people for whom software is merely a tool that either helps (hopefully) or hinders (unfortunately far too often) them in spending time on their actual job/project/hobby/etc.

I also spent a lot of time talking to people about my actual day job. I'm the development lead for Beaker, one of the test systems at Red Hat, and while it is very good at what it does (full stack integration testing from hardware, through the OS and up into application software), it also needs to integrate well with other systems like autotest and OpenStack if we're going to avoid pointlessly reinventing a lot of very complicated wheels. Learning more about what those projects are currently capable of makes it easier for me to prioritise the things we work on, and make suitable choices about Beaker's overall architecture.

At the sprints, in addition to working on CPython and some packaging related questions, I also took the opportunity to catch up with the Mailman 3 developers - the open source world needs an email/web forum gateway that at least isn't actively awful, and the combination of Mailman 3 with the hyperkitty archiver is shaping up to be positively wonderful.


I didn't spend the entire conference weekend talking to people - I actually got to go see a few talks as well. All of the talks I attended were excellent, but some particular personal highlights were Mike Bayer's deep dive into SQLAlchemy's session behaviour, the panel on the Boston Python Workshop and a number of other BPW inspired education and outreach events, Mel Chua's whirlwind tour of educational psychology, Lynn Root's educational projects for new coders (with accompanying website), Dave Malcolm's follow-up on his efforts with static analysis of all of the CPython extensions in Fedora, and Dave Beazley's ventures into automated home manufacturing of wooden toys (and destruction of laptop hard drives). There were plenty of other talks that looked interesting but I unfortunately didn't get to (one of the few downsides of having so many impromptu hallway conversations). All the PyCon US 2013 talks should be showing up on pyvideo.org as the presenters give the thumbs up, and the presentation slides are also available, so it's worth trawling through the respective lists for the topics that interest you.

In the midst of all that, Van Lindberg (PSF chairman) revealed the first public draft of the redesigned python.org (I was one of the members of the review committee that selected Project Evolution, RevSys and Divio as the drivers of this initial phase of the redesign process), and also announced the successful resolution of the PSF's trademark dispute in the EU.

This was only my second PyCon in North America (I've been to all three Australian PyCons, and attended PyCon India last year) and the first since I joined Red Hat. Meeting old friends from around the world, meeting other Pythonistas that I only knew by reputation or through Twitter and email, and meeting fellow Red Hatters that I had previously only met through IRC and email was a huge amount of fun. Attending the PyLadies charity auction, visiting the Computer History Museum with Guido van Rossum, Ned Deily and Dwayne Litzenberger (from Dropbox), chatting with Stephen Turnbull about promoting the adoption of open source and open source development practices in Japan, and getting to tour a small part of the Googleplex were just a few of the interesting bonus events from the week (and now I have a few days vacation to do the full tourist thing here in SFO).

I'm still on an adrenaline high, and there are at least a dozen different reasons why. If everything above isn't enough, there were a few other exciting developments happening behind the scenes that I can't go into yet. Fortunately, the details of those should become public over the next few weeks so I won't need to contain myself too long.

This week was intense, but awesome. All the organisers, volunteers and sponsors that played a part in bringing it together should be proud :)

A Sliding Scale of Freedom

Spideroak's launch of Crypton prompted an interesting discussion on Twitter between myself and a few others. This mostly involved some fairly common "open source" versus "free software" objections to the use of the AGPL for the open source project as a marketing tactic to drive sales of commercial licenses for Spideroak. That conversation prompted me to post the following:


Myself, I'm lazy, so I'm a fan of permissive licensing - this blog is CC0, and the open source stuff I write and license entirely myself uses the Simplified BSD License (which only has 2 clauses in it, and is pretty much limited to disclaiming warranties and saying "Hey, I wrote this"). Those license choices accurately reflect the effort I'm prepared to put into enforcing the legal rights I receive by default under current copyright regimes: absolutely none.

However, I'm not dependent on that software or this blog for my livelihood - they're a hobby, something I do because I want to, not because I need to. My lack of concern about these matters is a luxury and a privilege, because I don't need to worry about where my next meal is coming from - I have a stable job for that, with an employer I thoroughly respect and greatly enjoy working for.

Plenty of people and organizations around the world have gained value from my hobby (and will likely gain more in the future), and the pay-off I see personally is purely in terms of immediate enjoyment, long term reputation gain, and the opportunity to meet and become friends with interesting people I would never have encountered otherwise.

That means it saddens me when companies that are making their software freely available to the world are derided for not being open enough when they make the strategic decision to employ a dual licensing model, and also choose to use the GPL or AGPL to create an enforced commons on the open source side, thus making the commercial offering more attractive. They get accused of wanting to "exploit" the developers that might choose to participate in their project, because the sponsoring company controls the copyrights and can issue commercial licenses, while the third party developers "only" get to use (and customise, and redistribute) the software for free.

Being able to categorically deny such accusations is definitely one of the advantages of a "license in = license out" model for a sponsored project, where the original sponsor quickly becomes bound by the same license obligations as everyone else, but dual licensing is still several orders of magnitude better than keeping a solution proprietary.

There are many potential consumers who will consider being able to use software as more important than being able to redistribute it under a more permissive or closed license, and even for those that eventually decide they want a commercial license, dual licensing allows true "try before you buy" evaluation (since even the AGPL doesn't really kick in if you're not making your service available to the general public over the internet). Even the most ardent GPL detractors are also likely happy to use GPL software when it meets their needs, whether that's in the form of an OS (Linux), or cryptographic software (GPG), etc.

The strategic fears that lead many companies taking their hesitant first steps into the open source arena to favour copyleft licenses over permissive ones shouldn't be dismissed lightly. I'm young enough that I only caught the tail end of the proprietary Unix wars (mostly through antiquated platform specific cruft in the CPython code base), but I personally lay a lot of the credit for Linux avoiding the fragmented state of AIX/IRIX/Tru64/HP-UX/Solaris at the feet of the GPL. The legal strength of the GPL means that competitors with no reason to trust each other at the strategic level can still collaborate effectively at a technical level (up to a point, anyway).

The free software world is still a minnow in the overall software development picture, the vast majority of which is still bespoke intranet deployments. Even when those deployments are based on free or open source software, that's hardly likely to be used as a selling point to those customers. Despite high-profile tech companies like Google and Amazon, the "cloud" is still in its infancy, and it is going to be a long time before many organisations are willing to trust cloud providers with their data. In the meantime, the likes of Microsoft, Oracle and IBM continue to make money hand over fist. Red Hat may be huge by open source company standards, and have some high-profile customers, but we still have a long way to go before we're even close to matching the proprietary giants in scale and ubiquity.

The battle to convince people that sharing leads to better software is not over by any means. It still needs to be fought, and fought hard, until paying for proprietary software instead of certified open source software becomes the aberration rather than the norm it still is today.

The friendly fire often directed by advocates of permissive licensing against those that choose to enforce an open commons to assuage understandable fears is not helpful in that broader fight. We should be celebrating the fact that another company has taken a step towards open development, rather than lamenting the fact they didn't travel all the way from proprietary to permissive licensing in one flying leap.

PyCon India 2012

Inspired by Noufal Ibrahim's recent article on the general state of the Python community in India, I've finally written this belated report on my recent India trip :)

At the end of October, I had the good fortune to attend PyCon India 2012 in Bangalore. Sankarshan Mukhopadhyay (from Red Hat's Pune office) suggested I submit some talk proposals a few months earlier, and I was able to combine a trip to attend the conference with visits to the Red Hat offices in Bangalore and Pune. It's always good to finally get to associate IRC nicks and email addresses with people that you've actually met in person! While Sankarshan unfortunately wasn't able to make it to the conference himself, I did get to meet him when I visited Pune, and Kushal Das and Ramakrishna Reddy (also fellow Red Hatters) took great care of me while I was over there (including a weekend trip out from Pune to see the Ajanta and Ellora caves - well worth the visit, especially if you're from somewhere like Australia with no human-built structures more than a couple of hundred years old!)

While I wasn't one of the keynote speakers (David Mertz gave the Saturday keynote, and Jacob Kaplan-Moss gave an excellent "State of the Python Web" keynote on Sunday), I did give a couple of talks - one on the new features in the recent Python 3.3 release, along with a longer version of the Path Dependent Development talk that I had previously presented at PyCon AU in August. Both seemed to go over reasonably well, and people liked the way Ryan Kelly's "playitagainsam" and "playitagainsam-js" tools allowed me to embed some demonstration code directly in the HTML5 presentation for the Python 3.3 talk.

Aside from giving those two talks, this was a rather different conference for me, as I spent a lot more time in the hallway chatting with people than I have at other Python conferences. It was interesting to see quite a few folks assume that because I'm a core developer, I must be an expert on all things Python, when I'm really a relative novice in many specific application areas. Fortunately, I was able to pass the many web technology related questions on to Jacob, so people were still able to get good answers to their questions, even when I couldn't supply them myself. I also got to hear about some interesting projects people are working on, such as an IVRS (interactive voice response system) utility that mothers can call to find out about required and recommended vaccinations for their newborn children (I alluded to this previously in my post about my perspective on Python's future prospects).

One thing unfortunately missing from the PyCon India schedule was the target experience level for the talks, so I did end up going to a couple of talks that, while interesting and well-presented introductions to the topic, didn't actually tell me anything I didn't already know. Avoiding any chance of that outcome is one of the reasons I really like attending "How we are using Python" style talks, and my favourite talk of the conference (aside from Jacob's keynote) was actually the one from Noufal Ibrahim and Anand Chitipothu on rewriting the Wayback Machine's archiving system (the other major reason I like attending such talks is that knowing I played a part, however small, in making these things possible is just plain cool).

While the volunteers involved put in a lot of effort and the conference was well attended and well worth attending, the A/V handling at the conference does still have room for improvement, as the videos linked above indicate. I've sent a few ideas to the organisers about reaching out to the PSF board for assistance and suggestions on that front. Hopefully they'll look into that a bit more for next year, as I think producing high quality talk recordings can act as excellent advertising for tech conferences in subsequent years, but doing that effectively requires a lot of preparation work both before and during the conference. There are some good resources for this now in the Python community at least in Australia and the US, so I'm hopeful that the PSF will be able to play a part in transferring that knowledge and experience to other parts of the world and we'll start seeing more and more Python conferences with recordings of a similar calibre to those from PyCon US and PyCon AU.

Python's Future: A Global Perspective

Is Python's future currently at risk? (TLDR: No)


Calvin Spealman recently posted his thoughts on various aspects of where he sees computing in general heading, and his concerns about where Python may fit in that future.

I think his concerns are somewhat valid as far as specific market segments go, but they overstate the case when it comes to "the future of Python", because the article takes a very narrow view of the computing field.

Smartphones and tablets are the new desktop (although the desktop won't go away, it will become limited to power users with demands for precision control and complex workflows). Python has long been relatively weak on the desktop when it comes to redistributing applications, due to the need to get the interpreter installed before it can be used. Microsoft's redistribution restrictions on their C runtime have made this all the more difficult when it comes to Windows.

We also made a fairly major misstep when we failed to appropriately advertise the addition of directory and zipfile execution support in Python 2.6: bundle your code with all of its dependencies except Python into a directory or zipfile, add a __main__.py file, and the Python interpreter will execute it as if it were a script. With a zip file, you can even add a shebang line to the front and flag it as executable, and a POSIX shell will pass it to Python automatically if you run it directly (I haven't tried it, but the py launcher shipped with 3.3 should also handle such files). While we later went back and added the appropriate notice to the What's New in Python 2.6 documentation, and updated the command line guide in the documentation, this capability still isn't widely known.
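
For anyone who hasn't seen the feature in action, here's a minimal sketch (the file and directory names are purely illustrative):

    # myapp/__main__.py - the entry point the interpreter looks for when
    # asked to execute a directory or zip archive directly
    import sys

    def main():
        print("Hello from " + sys.argv[0])

    if __name__ == "__main__":
        main()

Running "python myapp" executes that file directly, and zipping up the directory contents produces a single myapp.zip that can be run the same way.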

The complaints about dynamic language overhead on mobile devices don't hold much water for me. Smartphones now are more powerful than desktops were less than a decade ago, and Mozilla's Boot2Gecko project holds a lot of potential. While battery technology doesn't advance as fast as computing technology, Moore's Law is leveraged in the mobile space to allow more to be done with less power, reproducing the desktop (and server!) trajectories where dynamic languages were initially derided as too slow, until the hardware caught up to get them to the point of being "fast enough".

However, Python's real strengths have long been server-side technology, software development by non-programmers, and embedded scripting for trusted plugins (as opposed to plugins that need to be strictly isolated from, for example, a core game engine or the host OS). And in those areas, it's still powering ahead.

Widespread adoption requires being taken for granted

Install a Linux distro. Which dynamic language interpreters are pre-installed? If you're using Debian or Fedora, it will be Python and Perl. The presence of those two can pretty much be taken for granted. Ruby probably won't be there, and a standalone JavaScript interpreter certainly won't be.

Apple have expressed their support for Python by building tools that rely on it (with, as far as I know, Python being the only dynamic language interpreter shipped as part of Mac OS X. Update: I'm told Apple ship Perl and Ruby as well), and Microsoft ship their Python Tools for Visual Studio bundle. Google, of course, famously chose Python as the only dynamic language supported on their App Engine platform (and they currently employ Guido van Rossum and a number of other Python core developers).

gcc and gdb both let you write plugins, and your language choices are C/C++ or Python (plus Lisp in the gcc case). Many other infrastructure level tools are going the same way. Fedora's infrastructure is almost entirely written in Python, as is OpenStack.

If you're into multimedia development, Python will be a core part of your toolset, and Python is the key open source competitor to proprietary toolsets in the scientific community. The Natural Language Toolkit is a hugely powerful resource for many data mining applications, and Python is entwined deeply into the core of the financial sector as well.

Also, just as many years ago a lot of formal education programs switched from C and C++ (or Pascal or Ada, etc.) to Java for introductory programming courses, many are now switching to Python, pushing Java into the role of an enterprise language used only for large and complex applications where the development overhead can be justified to some degree. Businesses are getting to the point where they can choose Python as part of their technology base while being assured of a future pool of recruits that already know the language.

Informal education programs are also favouring Python as the first "real world" application language that people are introduced to. OLPC chose Python, as did the Raspberry Pi project. Readability counts.

The Python Africa Tour has attracted quite a bit of interest, and I believe Africa plays host to its first PyCon later this year (in South Africa). Every other continent now hosts multiple PyCons each year in different countries and regions.

Only one kind of client

Things are substantially more competitive on the web service front, with Rails and Django going head-to-head, and Node.js attempting to play the "you can use the same language on the frontend and the backend!" card.

As far as Node.js goes, I'm firmly convinced that if Node.js were going to become a hugely popular server-side framework, Twisted would have taken over the world by now. Callback-based programming is just plain hard for most humans to wrap their heads around (often even harder than threaded programming) - hence the popularity of greenlets and gevent in the Python world, which permit the use of asynchronous IO capabilities with a threading-like programming style. The ongoing efforts around tweaking generator syntax and capabilities in Python core development could legitimately be summarised as "make it possible to write Twisted code in a way that doesn't hurt people's brains quite so much, and without relying on the magical stack-switching assembly code needed for greenlets".
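
As a rough illustration of why that style appeals to people, here's a minimal sketch using gevent (assuming it's installed; gevent.sleep simply stands in for any cooperative IO call):

    import gevent

    def worker(name, delay):
        # gevent.sleep yields control to the event loop, standing in for
        # any cooperative IO operation - the function still reads as
        # ordinary sequential code, with no explicit callbacks
        gevent.sleep(delay)
        print(name + " finished after " + str(delay) + " seconds")

    jobs = [gevent.spawn(worker, "fast", 0.5), gevent.spawn(worker, "slow", 1.5)]
    gevent.joinall(jobs)

The callback-based equivalent would need to split worker into separate "before" and "after" functions, which is exactly the mental overhead the synchronous style avoids.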

In this space, Python's strength really lies in its ability to step away from traditional web technologies. Want to talk over a serial port to a piece of lab equipment or a radio modem? Sure, we can do that. Want to talk to telco gear through a custom C extension? Sure, we have a wide range of tools to support that, too, along with some great Asterisk bindings. Python also has many web framework options, like Pyramid and Flask, that let you be more selective in your choice of components than Django does.
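
As a concrete example of the serial port case, here's a minimal sketch using the third party pyserial package (the device path, baud rate and command bytes are all made up for illustration):

    import serial  # third party "pyserial" package

    # Hypothetical device path and query - adjust for the actual hardware
    port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2)
    try:
        port.write(b"READ?\n")      # send a query to the instrument
        response = port.readline()  # read a single line of the reply
        print(response.decode("ascii", "replace"))
    finally:
        port.close()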

This is important, because I just spent the past weekend here at PyCon India. While smartphones are popular amongst the largely urban professionals that make up the web development community and those they regularly associate with, they're still only available to a small minority of the global population. Much of the rest of the world doesn't even have access to a desktop computer, let alone a smartphone. What they do have, though, are ordinary mobile phones (aka cellphones, for any Americans in the audience).

Added to that is the fact that a large share of the world's population has limited literacy - they can understand spoken instructions, and are sufficiently numerate to press numbers on a keypad to operate an interactive voice response system, but they won't be operating a smartphone any time soon, even if one were available to them.

The interfaces and language capabilities you need to reach *that* audience look nothing like those you can use to reach the smartphone-toting crowd.

And that's before we even get into the potential long-term implications of verbal and tactile interfaces like Siri and Baxter.

No reason to relax

All that said, while Python's future is looking very, very bright from where I sit, that's no reason to relax and assume that future is assured. Python is far from perfect, and the same can be said for the ecosystem around us.

Jacob's Sunday keynote at PyCon India spoke about the need for Python's web community to work on embracing the real time web, and lowering the barriers to entry to providing network-based realtime interactivity in Python-based web applications. It's likely any such efforts will require an update to the WSGI standards to support a streaming IO component, in addition to the current request/response model.

Tools like Kivy, which aim to make it easier to write mobile applications in Python, are also an important part of extending Python's reach into areas where it is currently weak.

The recent 3.3 release included several elements aimed at making things easier for beginners (especially those on Windows), such as improved error messages, an option in the Windows installer to modify PATH, and the new Python launcher. Meanwhile, the entire Python 3 series is aimed at embracing Unicode as part of the core of the language, allowing it to better reach beyond its original audience of users whose native alphabets could be expressed within the constraints of ASCII or an 8-bit encoding.

3.3 also took some of the first steps towards improving the "out of the box" packaging and dependency management experience, by integrating virtual environment support and namespace packages (which, among other things, makes empty __init__.py files optional).

Concurrency is an area where the overall Python ecosystem offers many more options than those provided by the CPython interpreter implementation alone. We do offer plenty of interesting tools, especially for embarrassingly parallel problems that fit nicely into the concurrent.futures execution model. The GIL does cause problems for particular workloads, and switching to Jython or IronPython to take advantage of the free-threaded JVM and CLR implementations isn't always going to be an option. I've written far more extensively on that topic, though, so I won't repeat that here.
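
As a trivial sketch of that execution model (the number crunching function is obviously made up):

    from concurrent.futures import ProcessPoolExecutor

    def crunch(n):
        # stand-in for a CPU bound task - running it in worker processes
        # sidesteps the GIL entirely
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(crunch, [10**5, 10**6, 10**7]))
        print(results)

(On Python 2, the same API is available via the "futures" backport on PyPI.)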

We should also look at ways of making it easier for other languages to interoperate with Python without an intervening C interface. Perhaps Python should ship a pycall script like this one, which makes it easy to invoke Python functions directly in a pipeline or from another application (passing JSON data in via stdin, and receiving JSON data back via stdout). Conversely, better shell integration is always worth exploring.
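
The core of such a tool could be as simple as this rough sketch (the module:function argument convention here is just an assumption for illustration):

    #!/usr/bin/env python
    # Hypothetical "pycall" sketch: read a JSON list of arguments from stdin,
    # call the named function, and write the JSON result to stdout
    import importlib
    import json
    import sys

    def main():
        module_name, _, function_name = sys.argv[1].partition(":")
        function = getattr(importlib.import_module(module_name), function_name)
        arguments = json.load(sys.stdin)
        json.dump(function(*arguments), sys.stdout)
        sys.stdout.write("\n")

    if __name__ == "__main__":
        main()

With that in place, something like: echo '["/usr", "bin"]' | python pycall.py os.path:join would print the JSON string "/usr/bin" back to the calling shell.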

And, of course, our journey in rebuilding the Unicode infrastructure is ongoing. Python 3.4 is likely to bring improvements in the ability to switch the encoding of a stream "mid-flight", as well as restoring some convenience APIs for the non-Unicode related uses of the encoding and decoding methods in Python 2.
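
As a minimal sketch of the kind of thing that should become easier: switching a text stream's encoding today means detaching and re-wrapping the underlying binary buffer yourself.

    import io
    import sys

    # Write one section of output with the stream's original encoding...
    sys.stdout.write("First section\n")
    sys.stdout.flush()

    # ...then switch the stream to UTF-8 "mid-flight" by taking over the
    # underlying binary buffer and wrapping it in a new text layer
    sys.stdout = io.TextIOWrapper(sys.stdout.detach(), encoding="utf-8")
    sys.stdout.write("Second section: café\n")
    sys.stdout.flush()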

So yes, there are plenty of areas where Python can, and should, and probably will, improve. But we shouldn't lose sight of the fact that many of the problems with Python (like binary distribution, dependency management and concurrency) are problems with software development generally, so there's nowhere for people to go that will magically make those issues disappear (or else they come at the price of losing out on many of Python's other advantages, or committing to a particular platform, or some other downside).

We're a conservative community by nature - we generally don't like blazing trails when it comes to language design. Instead, we're happy to let others rush ahead, letting them figure out where the pitfalls are, while we see what we can learn from their experience and integrate into Python's syntax, standard libraries, or the Python Package Index.