HUMAN BEINGS WILL BECOME LESS IMPORTANT
Jürgen Schmidhuber
In the world of technology, computer scientist Jürgen Schmidhuber has been dubbed ‘the father of modern AI’. A recent guest talk he gave at Ludwig-Maximilians-Universität in Munich for the Munich Center for Machine Learning turned into a kind of academic rock star event, with a packed house, loud adulation and queues of students. 60-year-old Munich-born Schmidhuber has been director of the Dalle Molle Institute for Artificial Intelligence Research in Lugano since 1995.
Two years ago, he also joined the Computer, Electrical, and Mathematical Sciences and Engineering division at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia as director of its AI Initiative. However, his greatest developments were achieved at the Technical University of Munich (TU Munich).
Professor Schmidhuber, when did you first start working with AI?
JS
I started pursuing publishable research into AI around 1985. But much of what makes today's forms of AI possible emerged primarily in 1990 and 1991, right here at TU Munich. One example is the principle used today to generate many images and deepfakes: two adversarial neural networks contest with each other and improve as a result. I published that in 1990.
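To make that adversarial principle concrete, here is a minimal sketch in PyTorch, assuming a toy one-dimensional data distribution; the network sizes, learning rates and data are illustrative assumptions, not the original 1990 formulation.

```python
# Minimal sketch of the adversarial principle: a generator tries to produce
# samples that a discriminator cannot tell apart from real data, and both
# networks improve by competing. Sizes and data are illustrative only.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0            # toy "real" data: N(2, 0.5)
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator learns to separate real from generated samples.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator learns to fool the discriminator.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```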
What else was there?
JS
Pretty much all the current buzzwords. For example, in 1991 I built a neural network based on the same principles used today by the transformer networks in generative AI systems like ChatGPT, enabling an AI system to learn how to predict the progression of a conversation by analysing the start. The famous concept of long short-term memory also has its roots in the year 1991, in a thesis by Sepp Hochreiter, for which I was the supervisor. In 2015 Google took it up and installed it on billions of smartphones.
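As a rough illustration of the kind of sequence prediction he describes, here is a minimal sketch in PyTorch of an LSTM trained to predict each next character of a toy text; the vocabulary, sizes and training data are illustrative assumptions, not the 1991 systems.

```python
# Minimal sketch of sequence prediction: an LSTM reads the start of a
# character sequence and is trained to predict each next character.
import torch
import torch.nn as nn

text = "the quick brown fox jumps over the lazy dog "
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
ids = torch.tensor([stoi[ch] for ch in text])

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h)

model = CharLSTM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x, y = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)   # inputs and next-char targets
for step in range(200):
    opt.zero_grad()
    logits = model(x)                                 # shape: (1, seq_len, vocab)
    loss = loss_fn(logits.reshape(-1, len(vocab)), y.reshape(-1))
    loss.backward()
    opt.step()
```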
Why was Germany so far advanced in fundamental research at that time, over 30 years ago?
JS
Well, German-speaking computer science pioneers had been doing good work for much longer than that. Don’t forget that the first integrated circuit—or chip—didn’t originate in the USA, but was developed at Siemens by Werner Jacobi, a TU Munich alumnus, and patented by him in 1949. And the first transistors weren’t the product of Bell Labs in 1948. Julius Edgar Lilienfeld developed the principle of the field-effect transistor in Leipzig as early as 1925, and then patented it in the USA.
Today, virtually all transistors are field-effect transistors. It was Konrad Zuse who developed the world's first programmable all-purpose computer between about 1936 and 1941. AI theory and the whole of theoretical computer science can be traced back to Kurt Gödel, who developed his incompleteness theorems on the fundamental limitations of arithmetic, and thus of AI, between 1931 and 1934. And modern AI, with its deep differentiable neural networks, ultimately has its roots in Gottfried Wilhelm Leibniz's chain rule of 1676.
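The link between the chain rule of 1676 and today's deep differentiable networks is that backpropagation is the chain rule applied layer by layer. The following is the standard textbook statement, written here with generic layers and a generic loss rather than any particular network.

```latex
% Leibniz's chain rule for a composition of differentiable functions:
\frac{d}{dx}\, g\bigl(h(x)\bigr) \;=\; g'\bigl(h(x)\bigr)\, h'(x)

% Applied layer by layer to a deep network a_j = f_j(a_{j-1}; \theta_j),
% j = 1, \dots, L, with loss \mathcal{L}(a_L): the gradient for the
% parameters of layer k chains all Jacobians from the output inward.
\frac{\partial \mathcal{L}}{\partial \theta_k}
  \;=\; \frac{\partial \mathcal{L}}{\partial a_L}
  \,\frac{\partial a_L}{\partial a_{L-1}} \cdots
  \frac{\partial a_{k+1}}{\partial a_k}
  \,\frac{\partial a_k}{\partial \theta_k}
```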
But the big money is now being made by US and Asian companies. Why has Germany missed the boat?
JS
Around 1990, things were looking up again for the big losers of World War II. Back then, West Germany's per capita wealth was higher than that of the USA, and some of the world's most famous companies were still German. At the same time, almost all of the world's highest-valued companies were in Japan, which used to have more robots than the whole of the rest of the planet put together and was also home to leading AI researchers.
But then things started to go downhill. The Tokyo stock market, previously bigger than the New York market, collapsed. The Soviet Union crumbled, and Germany suddenly faced a whole new set of problems in reuniting its two parts. Ever since, things haven't gone so well. In parallel to China's economic upswing, the USA experienced its own renaissance in the 1990s in the form of Silicon Valley. The dotcom bubble may have burst between 2000 and 2003, but a few companies survived: Apple, Google, Amazon, Microsoft.
What AI applications do you see as bringing the most benefits?
JS
For many people, medical applications are of prime importance. That was already the case around ten years ago, in 2012, when computers were a hundred times more expensive than they are today. Back then, our neural network was the first to win a competition for detecting cancer. Today the same kind of networks can also detect diabetes or arteriosclerosis. AI is revolutionising the whole field of medical image recognition. But today there are virtually no areas in which AI is absent.
Materials science is an important example. It is essential in processes such as reverse osmosis, which produces fresh water from sea water. Distillation would be too expensive, so membranes have been developed through which the sea water is forced in order to filter out the salt. Neural networks can now be used to model such complex membrane properties and improve them further. Photovoltaic systems are another area benefiting from AI.
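As a purely hypothetical illustration of such modelling (the parameter names, synthetic data and network below are assumptions, not a real desalination dataset), a small network can be trained as a surrogate that maps membrane design parameters to a measured quantity such as water flux, and then probed for promising designs.

```python
# Hypothetical surrogate model: an MLP maps membrane design parameters
# (e.g. pore size, thickness, charge density) to measured water flux.
# All data here is synthetic and for illustration only.
import torch
import torch.nn as nn

params = torch.rand(256, 3)                          # toy design parameters in [0, 1]
flux = (params * torch.tensor([2.0, -1.0, 0.5])).sum(dim=1, keepdim=True) \
       + 0.05 * torch.randn(256, 1)                  # synthetic "measured" flux

surrogate = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(2000):                             # fit the surrogate to the data
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(params), flux)
    loss.backward()
    opt.step()

# Because the surrogate is differentiable, candidate designs can be refined
# by gradient ascent on the predicted flux.
candidate = torch.rand(1, 3, requires_grad=True)
for step in range(100):
    pred = surrogate(candidate)
    grad, = torch.autograd.grad(pred.sum(), candidate)
    candidate = (candidate + 0.01 * grad).clamp(0.0, 1.0).detach().requires_grad_(True)
```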
You once said you’d always wanted to create something that was smarter than you are. Could AGI, or artificial general intelligence, be that thing?
JS
Ever since the 1970s, I've been predicting that something like that could happen within my lifetime. Many of my colleagues who studied computer science were convinced it would take much, much longer, perhaps even millennia. But a lot of them have changed their views in recent years and months.