I learnt a lot when I attended my very first EuroSciPy conference. Not all of it was practical: did you know, for instance, that Python is being used to come up with new breeds of chicken? From animal breeding to cognitive neuroscience to astrophysics, EuroSciPy was indeed the place to be to talk with folks using Python for all kinds of science.
Prior to the conference my impression was that science runs on Python, thanks in large part to the ease of use of the language and the excellent numpy and scipy libraries, so widely used for manipulating and processing data. So I guess I was pretty surprised that my biggest takeaways from this Python conference were two other languages.
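To give a flavour of that ease of use (a toy example of my own, not from any talk): a couple of lines of numpy replace what would otherwise be explicit Python loops over a data array.

```python
import numpy as np

# A year of synthetic daily temperature readings, for illustration only.
rng = np.random.default_rng(seed=0)
temps = rng.normal(loc=15.0, scale=8.0, size=365)

# Whole-array operations stand in for explicit loops.
anomalies = temps - temps.mean()        # deviation from the annual mean
warm_days = int((temps > 25.0).sum())   # number of days above 25 degrees

print(round(float(anomalies.mean()), 12), warm_days)
```

The same pattern scales from toy arrays to the large datasets people at the conference were actually working with.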
The first is Cython. The beauty of Python is that it is so easy to get started with and to use. But it is also well known that Python is slow. That shouldn't matter until later: once you've got your algorithms all correct, then you optimize. The traditional routes to optimization were Fortran (the conference featured the highest concentration of Fortran developers I'd ever been around) and C (the powerhouse behind numpy). But these are not always easy languages to wrangle with, as anyone who has ever had to debug a makefile can attest. Then along came Cython, offering real performance improvements without ever making you go to battle with a makefile. I learnt that some Python communities, like scikit-image, actively discourage optimizations in C because they make the codebase much less accessible to the wider group of programmers out there; for them, that accessibility outweighs the performance gains. So Cython is on its way up, even if you might argue that it is not a language in its own right.
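As a sketch of what that looks like in practice (a toy pairwise-sum of my own, not from any talk): the pure-Python function below runs as-is, and the commented Cython variant shows the handful of static type declarations that let Cython compile the hot loop down to C.

```python
def pairwise_sum(xs):
    """Sum of xs[i] * xs[j] over all pairs -- a typical tight numeric loop."""
    total = 0.0
    for i in range(len(xs)):
        for j in range(len(xs)):
            total += xs[i] * xs[j]
    return total

# The Cython version (in a .pyx file, compiled via cythonize) only adds
# type declarations; the loop body itself is unchanged:
#
#   def pairwise_sum(double[:] xs):
#       cdef double total = 0.0
#       cdef Py_ssize_t i, j
#       for i in range(xs.shape[0]):
#           for j in range(xs.shape[0]):
#               total += xs[i] * xs[j]
#       return total

print(pairwise_sum([1.0, 2.0, 3.0]))  # (1 + 2 + 3) ** 2 = 36.0
```

The typed loop runs at C speed because indexing and arithmetic no longer go through Python objects; the untyped version stays around as readable reference code.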
The other language, which is indeed a language in its own right, is Julia. The keynote of the conference was a presentation on Julia by Steve G. Johnson of MIT. The premise of Julia is to find the sweet spot between ease of use and performance. That always raises an eyebrow or two, as best-of-both scenarios are very, very difficult to achieve. It's like claiming you have an ideal work-life balance: yeah, right. But Steve Johnson went on to make some compelling arguments, highlighting how Julia can take advantage of JIT technology in ways Python just can't, even if retrofitted (it comes down to the language being designed so that types can be inferred). The crowd also seemed won over by promises of 'better-than-C' performance achieved through meta-programming. OK. Above and beyond that, I liked that one of the key things a new language has to get right is integration, and a large part of the presentation did exactly that. First came tool integration, with the IJulia notebook (which uses the same back-end as Project Jupyter, the project that evolved from IPython). Second was language integration, including of course using Julia with Python. Julia is finding a niche in the machine-learning space, and it will be interesting to see how things progress.
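To illustrate the type-inference point with a toy example of my own: the body of a Python function carries no type information, so a JIT cannot specialize it ahead of time. The same `+` might mean integer addition, float addition, or concatenation, depending entirely on what turns up at the call site.

```python
def add(a, b):
    # One function, one bytecode sequence -- but '+' dispatches on the
    # runtime types of a and b, which nothing in the source constrains.
    return a + b

print(add(1, 2))          # integer addition
print(add(1.5, 2.25))     # float addition
print(add("py", "thon"))  # string concatenation
print(add([1], [2]))      # list concatenation
```

Julia, by contrast, compiles a specialized native method for each combination of concrete argument types it actually encounters, and its semantics are designed so that those types can usually be inferred through the whole call chain.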
So Python, with its ease of use, will remain the glue, the enabler, the language that pulls it all together. And EuroSciPy will remain a great forum for discussing all things science, Python and more. But science is just too big for one language.