MIT Develops New Programming Language for High-Performance Computers

By Steve Nadis

High-performance computing is needed for an ever-growing number of tasks — such as image processing or various deep learning applications on neural nets — where one must plow through immense piles of data reasonably quickly, or else the job could take an impractically long time. It’s widely believed that, in carrying out operations of this sort, there are unavoidable trade-offs between speed and reliability. If speed is the top priority, according to this view, then reliability will likely suffer, and vice versa.

However, a team of researchers, based mainly at MIT, is calling that notion into question, claiming that one can, in fact, have it all. With the new programming language, which they’ve written specifically for high-performance computing, says Amanda Liu, a second-year Ph.D. student at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), “speed and correctness do not have to compete. Instead, they can go together, hand-in-hand, in the programs we write.”

Liu — along with University of California at Berkeley postdoc Gilbert Louis Bernstein, MIT Associate Professor Adam Chlipala, and MIT Assistant Professor Jonathan Ragan-Kelley — described the potential of their recently developed creation, “A Tensor Language” (ATL), last month at the Principles of Programming Languages conference in Philadelphia.

“Everything in our language,” Liu says, “is aimed at producing either a single number or a tensor.” Tensors, in turn, are generalizations of vectors and matrices. Whereas vectors are one-dimensional objects (often represented by individual arrows) and matrices are familiar two-dimensional arrays of numbers, tensors are n-dimensional arrays, which could take the form of a 3x3x3 array, for instance, or something of even higher (or lower) dimensions.
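ATL’s own syntax is not reproduced in this article, but the idea of tensors as n-dimensional arrays can be sketched in ordinary Python with NumPy (the use of NumPy here is purely an illustrative assumption, not part of the MIT work):

```python
import numpy as np

# A vector: a one-dimensional array of numbers.
vector = np.array([1.0, 2.0, 3.0])            # shape (3,)

# A matrix: a familiar two-dimensional array.
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])               # shape (2, 2)

# A tensor: an n-dimensional array -- here, a 3x3x3 example.
tensor = np.zeros((3, 3, 3))                  # shape (3, 3, 3)

print(vector.ndim, matrix.ndim, tensor.ndim)  # prints: 1 2 3
```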

The whole point of a computer algorithm or program is to carry out a particular computation. But there can be many different ways of writing that program — “a bewildering variety of different code realizations,” as Liu and her co-authors wrote in their soon-to-be-published conference paper — some considerably speedier than others. The primary rationale behind ATL is this, she explains: “Given that high-performance computing is so resource-intensive, you want to be able to modify, or rewrite, programs into an optimal form in order to speed things up. One often starts with a program that is easiest to write, but that may not be the fastest way to run it, so further adjustments are still needed.”
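To make that rewriting step concrete, here is a minimal sketch — again in Python with NumPy rather than ATL, with hypothetical function names invented for this example — of a program written the easy way alongside an equivalent, faster rewrite that avoids building an intermediate array:

```python
import numpy as np

def blur_then_sum_naive(x):
    # Easiest to write: materialize the full blurred array,
    # then reduce it -- two passes over the data.
    blurred = (x[:-2] + x[1:-1] + x[2:]) / 3.0
    return blurred.sum()

def blur_then_sum_rewritten(x):
    # Rewritten form: the same computation with the reduction
    # pushed inside, so no intermediate array is ever built.
    return (x[:-2].sum() + x[1:-1].sum() + x[2:].sum()) / 3.0

x = np.random.rand(1_000_000)
assert np.isclose(blur_then_sum_naive(x), blur_then_sum_rewritten(x))
```

Both functions return the same number; the rewritten one simply skips the temporary array. Rewrites of this kind, performed without sacrificing the guarantee that the optimized program still computes what the original did, are the “speed and correctness” pairing Liu describes.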
