The Language of the Future
Most of the popular languages today (Perl/Python/PHP/Ruby, C++, Java) are locked into the serial, von Neumann model of computation. Concurrency is expressed as threads and processes, which rely on the underlying OS’s handling of memory and other resources. Joe Armstrong has demonstrated that this just isn’t good enough; the programmer needs truly independent processes, together with the ability to spawn them at nearly zero cost. Because we’ve hit a point of diminishing returns trying to optimize single CPUs, the industry has begun the move toward multi-core machines, and that only benefits programs written with concurrency in mind. So the language of the future will have extremely lightweight concurrency built in (just like Erlang).
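To make the “nearly zero cost” point concrete, here is a rough sketch of the style I mean. It’s in Python using asyncio tasks, which are only cooperative stand-ins for Erlang’s truly isolated processes, but they show the shape of it: spawn thousands of independent workers cheaply and have them communicate only by message passing. The names (worker, main) and the count are made up for illustration.

    import asyncio

    async def worker(inbox: asyncio.Queue, outbox: asyncio.Queue) -> None:
        # Each worker sees only the messages sent to its own inbox.
        msg = await inbox.get()
        await outbox.put(msg * 2)

    async def main() -> None:
        results: asyncio.Queue = asyncio.Queue()
        inboxes = [asyncio.Queue() for _ in range(10_000)]

        # Spawning 10,000 tasks is cheap; 10,000 OS threads generally is not.
        tasks = [asyncio.create_task(worker(q, results)) for q in inboxes]
        for i, q in enumerate(inboxes):
            await q.put(i)

        await asyncio.gather(*tasks)
        print(results.qsize())  # 10000

    asyncio.run(main())

A real Erlang-style system would also give each worker its own heap and preemptive scheduling; the point of the sketch is only how cheap spawning has to be.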
But what new problems will it solve? One of the best, and most often overlooked, problem domains is constraint solving. These problems show up everywhere, especially in scheduling and routing. Businesses would love a language specifically geared toward problems of resource allocation. Prolog already does this very well but never caught on. I think many constraint problems can be solved even more rapidly by exploiting extreme concurrency. So the language of the future will popularize an old class of problems, the constraint problem, but it will do so with a parallel computing model and force us to think about our problems in a different way.
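To ground what I mean by a constraint problem, here is a toy scheduling example in Python, plain backtracking over invented data (the meeting names, times, and rooms are just for illustration): assign each meeting a room so that overlapping meetings never share one. Each branch of the search is independent, which is exactly the kind of work a massively concurrent language could farm out to cheap processes.

    # Toy data: meetings with (start, end) hours, and the rooms available.
    meetings = {"standup": (9, 10), "review": (9, 11), "retro": (10, 12)}
    rooms = ["A", "B"]

    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]

    def solve(assigned=None):
        assigned = assigned or {}
        if len(assigned) == len(meetings):
            return assigned
        name = next(m for m in meetings if m not in assigned)
        for room in rooms:
            # Constraint: two overlapping meetings may not share a room.
            if all(not (room == r and overlaps(meetings[name], meetings[other]))
                   for other, r in assigned.items()):
                result = solve({**assigned, name: room})
                if result:
                    return result
        return None  # dead end: backtrack

    print(solve())  # e.g. {'standup': 'A', 'review': 'B', 'retro': 'A'}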
I would like the new language to have a clean syntax, like Python. It should be easy to read and easy for the programmer to manipulate. Clean syntax matters because it’s hard to teach an old dog new tricks: most programmers were brought up on the serial model of computation, so they’ll already have a bunch of new ideas to swallow, and the syntax shouldn’t throw them off too. It’ll also have to be competitive at solving most existing problems: it’ll need plenty of built-in convenience functions for string manipulation, a well-organized GUI library, and bundled libraries for networking, HTML, parsing, regular expressions, numerical analysis, and so on.
I still don’t know whether it’ll be imperative or functional, but it’ll probably wind up being functional, which would slow adoption.
Update: LtU discussion and related paper
Hi Eric,
There are also the beginnings of using graphics chips for non-graphics computation, which is a different sort of parallelism, and of using FPGAs to tailor the hardware to the problem on the fly.
We live in interesting times.
– Paddy.