Jürgen Schmidhuber
| Jürgen Schmidhuber | |
|---|---|
| Born | January 17, 1963, Munich, Germany |
| Residence | Switzerland |
| Nationality | German |
| Fields | Artificial intelligence |
| Institutions | Dalle Molle Institute for Artificial Intelligence Research |
| Alma mater | Technische Universität München |
Jürgen Schmidhuber (born 17 January 1963 in Munich) is a German computer scientist and artist known for his work on machine learning, artificial intelligence (AI), artificial neural networks, digital physics, and low-complexity art. Schmidhuber was professor of Cognitive Robotics at the Technische Universität München before joining the faculty of the University of Lugano in 2009 as a professor of Artificial Intelligence; his contributions include generalizations of Kolmogorov complexity and the Speed Prior. He has been co-director of the Swiss AI Lab, IDSIA, in Lugano since 1995. Algorithms based on concepts developed by his research team are used broadly, for example in Google's smartphone speech recognition, and his group's neural networks have repeatedly won international competitions in pattern recognition and machine learning. Schmidhuber has received various awards, including the 2016 IEEE CIS Neural Networks Pioneer Award and the 2013 Helmholtz Award of the International Neural Networks Society. He has been a member of the European Academy of Sciences and Arts since 2008.
Early life and education
Jürgen Schmidhuber was born on 17 January 1963 in Munich. Schmidhuber did his undergraduate studies at Technische Universität München.
Career
Schmidhuber is currently a co-director of the Swiss Dalle Molle Institute for AI (IDSIA),[1] in Manno-Lugano,[2] a position he has held since 1995.
He was professor of Cognitive Robotics at the Technische Universität München from 2004 to 2009. Since 2009, he has also served as professor of Artificial Intelligence at the University of Lugano.
Contributions
Recurrent neural networks
The dynamic recurrent neural networks developed in his lab are simplified mathematical models of the biological neural networks found in human brains. A particularly successful model of this type is long short-term memory (LSTM), reported by Sepp Hochreiter and Schmidhuber in 1997[3] after development beginning in 1995, and now a standard component of deep learning. From training sequences, LSTM can "learn" to solve numerous tasks, some of which were unsolvable by previous such models. Applications range from automatic music composition and speech recognition to reinforcement learning and robotics in partially observable environments. Schmidhuber's group reported benchmark results in automatic handwriting recognition obtained with deep neural networks[4] and with recurrent neural networks,[5] the best results in each category as of 2010.
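The gating mechanism at the heart of LSTM can be sketched in a few lines of NumPy. This is a simplified illustration of the general idea, not Hochreiter and Schmidhuber's original 1997 formulation; all names and shapes are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step: gates decide what to forget, store, and emit.
    W has shape (4*H, D+H); b has shape (4*H,)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[:H])        # forget gate: how much of the old cell state to keep
    i = sigmoid(z[H:2*H])     # input gate: how much new information to write
    g = np.tanh(z[2*H:3*H])   # candidate values to write
    o = sigmoid(z[3*H:])      # output gate: how much of the cell state to expose
    c = f * c_prev + i * g    # cell state carries long-range information
    h = o * np.tanh(c)        # hidden state is the per-step output
    return h, c

# Toy usage: run a short random sequence through one cell.
rng = np.random.default_rng(0)
D, H = 3, 4
W = rng.standard_normal((4 * H, D + H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.standard_normal((5, D)):  # a sequence of 5 input vectors
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)  # (4,)
```

The multiplicative gates are what let gradients flow over long time lags, which is why LSTM could solve tasks that earlier recurrent networks could not.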
The LSTM approach is used for numerous applications, including, as of 2015, Google's speech recognition in smartphones.[1][6]
Artificial evolution / genetic programming
During his undergraduate studies, Schmidhuber worked on evolving computer programs through genetic algorithms. The method he developed was published in 1987 and is reportedly one of the first papers in the field that later became known as genetic programming. In the same year, he published what has been described as the first work on meta-genetic programming. He has since co-authored numerous further papers on artificial evolution, with applications including robot control, soccer learning, drag minimization, and time series prediction.
Neural economy
In 1989 he created a learning algorithm for neural networks, reportedly the first, based on principles of the market economy, inspired by John Holland's bucket brigade algorithm for classifier systems. Adaptive neurons compete to be active in response to certain input patterns. Those that are active when there is external reward get stronger synapses, but active neurons must pay those that activated them by transferring part of their own synapse strength, thus rewarding "hidden" neurons that set the stage for later success.[7]
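The payment scheme can be illustrated with a toy sketch. This is not the 1989 algorithm itself, only the bookkeeping idea: a chain of active units settles accounts by passing a fraction of strength backward, so total "wealth" is conserved and early units share in later reward:

```python
# Toy sketch of market-style credit assignment (illustrative only).
# chain_strengths[k] is the synapse strength of the k-th active unit;
# unit k was activated by unit k-1, so k pays k-1 a fraction of its strength.
def settle(chain_strengths, reward, tax=0.1):
    s = list(chain_strengths)
    s[-1] += reward                  # the unit producing the rewarded output is paid
    for k in range(len(s) - 1, 0, -1):
        payment = tax * s[k]         # each active unit pays its activator
        s[k] -= payment
        s[k - 1] += payment
    return s

result = settle([1.0, 1.0, 1.0], reward=1.0)
print(result)
```

Because payments only move strength between units, the total strength after settling equals the initial total plus the external reward, and "hidden" units early in the chain end up richer than before.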
Artificial curiosity and creativity
In 1990 he began publishing a long series of papers on artificial curiosity and creativity for an autonomous agent. The agent is equipped with an adaptive predictor that tries to predict future events from the history of previous events and actions. A reward-maximizing, reinforcement-learning adaptive controller steers the agent and receives curiosity reward for executing action sequences that improve the predictor. This discourages it from executing actions leading to boring outcomes that are either already predictable or totally unpredictable.[8] Instead, the controller is motivated to learn actions that help the predictor discover new, previously unknown regularities in its environment, thus improving its model of the world, which in turn can greatly help to solve externally given tasks. This has become an important concept in developmental robotics. Schmidhuber argues that his corresponding formal theory of creativity explains essential aspects of art, science, music, and humor.[9]
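The key point, that curiosity reward measures learning *progress* rather than raw surprise, can be sketched with a deliberately tiny toy model (names and the running-mean predictor are illustrative assumptions, not Schmidhuber's architecture):

```python
import numpy as np

class ToyPredictor:
    """Predicts the next observation as a running mean; error shrinks as it learns."""
    def __init__(self, dim, lr=0.5):
        self.mean = np.zeros(dim)
        self.lr = lr
    def error(self, obs):
        return float(np.linalg.norm(obs - self.mean))
    def update(self, obs):
        self.mean += self.lr * (obs - self.mean)

def curiosity_reward(predictor, obs):
    """Intrinsic reward = improvement of the predictor, i.e. learning progress."""
    before = predictor.error(obs)
    predictor.update(obs)
    after = predictor.error(obs)
    return before - after

p = ToyPredictor(dim=2)
obs = np.array([1.0, 1.0])
r1 = curiosity_reward(p, obs)  # first visit: much to learn, large reward
r2 = curiosity_reward(p, obs)  # repeat visit: less to learn, smaller reward
```

Rewarding raw prediction error would attract the agent to unlearnable noise; rewarding the *drop* in error makes both fully predictable and totally unpredictable situations boring, exactly as described above.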
Unsupervised learning / factorial codes
During the early 1990s Schmidhuber also invented a neural method for nonlinear independent component analysis (ICA) called predictability minimization. It is based on co-evolution of adaptive predictors and initially random, adaptive feature detectors processing input patterns from the environment. For each detector there is a predictor trying to predict its current value from the values of neighboring detectors, while each detector is simultaneously trying to become as unpredictable as possible.[10] It can be shown that the best the detectors can do is to create a factorial code of the environment, that is, a code that conveys all the information about the inputs such that the code components are statistically independent, which is desirable for many pattern recognition applications.
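The adversarial objective behind predictability minimization can be stated compactly. The two-detector toy below is an illustrative sketch of the opposing loss functions only, not the original architecture; all names are assumptions:

```python
import numpy as np

def predictor_loss(y, preds):
    """Predictors minimize the squared error of predicting each detector's
    output y[i] from the other detectors' outputs (preds[i])."""
    return float(np.mean((y - preds) ** 2))

def detector_loss(y, preds):
    """Detectors are trained on the opposite objective: they maximize the
    predictors' error, which drives their outputs toward statistical
    independence, i.e. a factorial code."""
    return -predictor_loss(y, preds)

y = np.array([0.2, 0.9])      # detector outputs for one input pattern
preds = np.array([0.5, 0.5])  # each predicted from the other detectors
print(predictor_loss(y, preds))  # 0.125
```

The zero-sum structure, with one module minimizing exactly what the other maximizes, is what pushes the detectors toward codes whose components carry no information about each other.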
Kolmogorov complexity / computer-generated universe
Schmidhuber published a 1997 paper entitled "A Computer Scientist's View of Life, the Universe, and Everything,"[11] which built on Konrad Zuse's 1967 hypothesis that the history of the universe is computable.[12] He pointed out that the simplest explanation of the universe would be a very simple Turing machine programmed to systematically execute all possible programs, computing all possible histories for all types of computable physical laws.[11] He also pointed out that there is an optimally efficient way of computing all computable universes based on Leonid Levin's universal search algorithm (1973). In 2000 he expanded this work by combining Ray Solomonoff's theory of inductive inference with the assumption that quickly computable universes are more likely than others.[13] This work on digital physics also led to limit-computable generalizations of algorithmic information, or Kolmogorov complexity, and to the concept of Super Omegas, limit-computable numbers that are, in a certain sense, even more random than Gregory Chaitin's number of wisdom Omega.[14]
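The scheduling trick that lets one simple machine "run all programs" is dovetailing. The sketch below shows only that schedule; programs are toy stand-ins (integers) and `run_for` is a hypothetical interpreter hook, both assumptions for illustration:

```python
# Dovetailing: one loop that, continued forever, runs every program for every
# number of steps. (Levin's 1973 universal search refines the schedule by
# giving program p a time share proportional to 2**-len(p).)
def dovetail(run_for, phases):
    """In phase t, run programs 0..t-1 for t steps each."""
    schedule = []
    for t in range(1, phases + 1):
        for p in range(t):
            schedule.append((p, t))  # (program, step budget)
            run_for(p, t)
    return schedule

log = []
sched = dovetail(lambda p, steps: log.append(p), phases=3)
print(sched)  # [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
```

Every program is revisited with ever-larger step budgets, so any program that halts (or any universe-history prefix it computes) is eventually produced, without the machine ever getting stuck on a non-halting program.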
Universal AI
His group has also worked on universal learning algorithms and universal AI.[15][16][17] Contributions include a theoretically optimal decision-maker living in an environment obeying arbitrary unknown but computable probabilistic laws (reportedly the first), and the supervision of postdoctoral collaborator Marcus Hutter, who developed mathematically sound general problem solvers such as an asymptotically fastest algorithm for all well-defined problems.
Based on the theoretical results obtained in the early 2000s, Schmidhuber is actively promoting the view that in the new millennium the field of general AI has matured and become a real formal science.
Low-complexity art / theory of beauty
Schmidhuber's low-complexity artworks (since 1997) can be described by very short computer programs containing very few bits of information, and reflect his formal theory of beauty[18] based on the concepts of Kolmogorov complexity and minimum description length.
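A rough way to see the underlying intuition: Kolmogorov complexity itself is uncomputable, but compressed size gives a crude computable proxy. The sketch below uses that proxy only to illustrate "short description = low complexity"; it is not Schmidhuber's measure of beauty:

```python
import zlib

def proxy_complexity(data: bytes) -> int:
    """Compressed size as a crude stand-in for description length."""
    return len(zlib.compress(data, 9))

regular = b"abab" * 64      # 256 bytes with an obvious short description
irregular = bytes(range(256))  # 256 bytes with no repeated structure

print(proxy_complexity(regular) < proxy_complexity(irregular))  # True
```

Under the theory, an observer finds a pattern aesthetically pleasing when it admits a description much shorter than the raw data, as the regular string above does.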
Schmidhuber writes that since age 15 or so his main scientific ambition has been to build an optimal scientist, then retire. First he wants to build a scientist better than himself (he quips that his colleagues claim that should be easy) who will then do the remaining work. He claims he "cannot see any more efficient way of using and multiplying the little creativity he's got".
Robot learning
In recent years, a robotics group focused on intelligent and learning robots, especially in swarm and humanoid robotics, was established at his lab. The lab is equipped with a variety of mobile and flying robots and is one of roughly 20 labs in the world that own an iCub humanoid robot. The group has applied a variety of machine learning algorithms, such as reinforcement learning and genetic programming, to improve the adaptiveness and autonomy of robotic systems.
His recent work on evolutionary robotics, focused on using genetic programming to evolve robotic skills, especially in robot vision, has enabled quick and robust object detection in humanoid robots.[19][20][21] IDSIA's work with the iCub humanoid won the 2013 AAAI Student Video competition.[22]
Commercial interests
Schmidhuber, Sepp Hochreiter, Jaan Tallinn, Faustino Gomez, and others created a business entity, Nnaisense, in 2014 to work on "general purpose" artificial intelligence applications and to commercialize the types of technologies that Schmidhuber has focused on in his academic research. Based in Lugano near Schmidhuber's academic operation, the company has developed partnerships "in finance, autonomous vehicles and heavy industry" under its chief executive, Gomez (a former IDSIA postdoctoral fellow who trained in the U.S.), with guidance from Tallinn (a co-founder of Skype) and Hochreiter (of Johannes Kepler University in Linz, Austria).[1]
Awards and recognition
Schmidhuber and his collaborators have received several best paper awards at scientific conferences on evolutionary computation.
Schmidhuber was elected to the European Academy of Sciences and Arts in 2008.[23][24]
The recurrent neural networks and deep feedforward neural networks developed in Schmidhuber's research group won eight international competitions in pattern recognition and machine learning between 2009 and 2012.[25]
He has received further awards, including the 2013 Helmholtz Award of the International Neural Networks Society, and the 2016 IEEE CIS Neural Networks Pioneer Award.[26]
References
1. Markoff, John (November 27, 2016). "When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'". The New York Times. Retrieved 2016-11-27.
   [Quote:] In 1997, Dr. Schmidhuber and Sepp Hochreiter published a paper on a technique that has proved crucial in laying groundwork for the rapid progress that has been made recently in vision and speech. The idea, known as Long Short-Term Memory, or LSTM, was not widely understood when it was introduced. It essentially offered a form of memory or context to neural networks. / Just as humans do not restart learning from scratch every second, a certain type of neural network adds loops or memory that interpret each new word or observation in light of what has been previously observed. LSTM strikingly improved these networks, leading to huge jumps in accuracy. / It may be that Dr. Schmidhuber’s misfortune is that he was simply too early — a few years ahead of the powerful and more affordable computers we have today. It was not until recently that his concepts started to pan out. / Last year, for example, Google researchers reported that they had used LSTM to cut transcription errors in their speech recognition service by up to 49 percent. It was a huge increase after years of incremental progress.
2. IDSIA Staff (November 29, 2016). "Robotics Lab". IDSIA.ch. Manno-Lugano, CHE: Istituto Dalle Molle sull'Intelligenza Artificiale (IDSIA). Retrieved 29 November 2016.
3. S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997.
4. D. C. Ciresan, U. Meier, L. M. Gambardella, J. Schmidhuber (2010). Deep Big Simple Neural Nets for Handwritten Digit Recognition. Neural Computation, 22(12):3207–3220.
5. A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, J. Schmidhuber (2009). A Novel Connectionist System for Improved Unconstrained Handwriting Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5).
6. Gulcehre, Caglar; et al. (September 30, 2015). "Long Short-Term Memory Dramatically Improves Google Voice Etc.—Now Available to a Billion Users". DeepLearning.net. Retrieved 2016-11-27.
7. J. Schmidhuber. A local learning algorithm for dynamic feedforward and recurrent networks. Connection Science, 1(4):403–412, 1989.
8. J. Schmidhuber. Curious model-building control systems. In Proc. International Joint Conference on Neural Networks, Singapore, volume 2, pages 1458–1463. IEEE, 1991.
9. J. Schmidhuber. Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247, 2010.
10. J. Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863–879, 1992.
11. J. Schmidhuber. "A Computer Scientist's View of Life, the Universe, and Everything." Foundations of Computer Science: Potential – Theory – Cognition, Lecture Notes in Computer Science, pages 201–208, Springer, 1997.
12. Greene, Brian (2011). "Universes, Computers, and Mathematical Reality: The Simulated and Ultimate Multiverses". The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos. New York: Knopf Doubleday. pp. 314–352. ISBN 0307595250. Retrieved 29 November 2016.
13. J. Schmidhuber. The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions. Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), Sydney, Australia, LNAI, 216–228, Springer, 2002.
14. J. Schmidhuber. Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit. International Journal of Foundations of Computer Science, 13(4):587–612, 2002.
15. J. Schmidhuber. Ultimate Cognition à la Gödel. Cognitive Computation, 1(2):177–193, 2009.
16. J. Schmidhuber. Optimal Ordered Problem Solver. Machine Learning, 54:211–254, 2004.
17. See the article on Gödel machines.
18. J. Schmidhuber. Low-Complexity Art. Leonardo, Journal of the International Society for the Arts, Sciences, and Technology, 30(2):97–103, MIT Press, 1997.
19. J. Leitner, S. Harding, P. Chandrashekhariah, M. Frank, A. Förster, J. Triesch and J. Schmidhuber. Learning Visual Object Detection and Localisation Using icVision. Biologically Inspired Cognitive Architectures, Vol. 5, 2013.
20. J. Leitner, S. Harding, M. Frank, A. Förster and J. Schmidhuber. Humanoid Learns to Detect Its Own Hands. IEEE Congress on Evolutionary Computation (CEC), 2013.
21. S. Harding, J. Leitner and J. Schmidhuber. Cartesian Genetic Programming for Image Processing (CGP-IP). Genetic Programming Theory and Practice X (Springer Tract on Genetic and Evolutionary Computation), pp. 31–44. ISBN 978-1-4614-6845-5. Springer, Ann Arbor, 2013.
22. Stollenga, Marijn; Pape, Leo; Frank, Kail; Leitner, Juergen; Förster, Alexander & Schmidhuber, Jürgen (2013). Task Relevant Roadmaps: iCub Demonstrations (streaming video). Palo Alto, CA: Association for the Advancement of Artificial Intelligence (AAAI). Retrieved 29 November 2016.
23. O'Leary, Dave (October 3, 2016). "The Present and Future of AI and Deep Learning Featuring Professor Jürgen Schmidhuber". IT World Canada. Retrieved 29 November 2016.
24. "European Academy of Sciences and Arts". euro-acad.eu. Retrieved 29 November 2016.
25. Angelica, Amara D. & Schmidhuber, J. (November 28, 2012). "How bio-inspired deep learning keeps winning competitions" (interview). KurzweilAI.net. Retrieved 29 November 2016.
26. IEEE Staff (November 29, 2016). "Award Recipients, Neural Networks Pioneer Award". CIS.IEEE.org. Piscataway, NJ: IEEE Computational Intelligence Society. Retrieved 29 November 2016.
Further reading
- Sak, Haşim; Senior, Andrew; Rao, Kanishka; Beaufays, Françoise & Schalkwyk, Johan (September 24, 2015). "Google Voice Search: Faster and More Accurate". Google Research Blog. Retrieved 29 November 2016.
- O'Leary, Dave (October 3, 2016). "The Present and Future of AI and Deep Learning Featuring Professor Jürgen Schmidhuber". IT World Canada. Retrieved 29 November 2016.
- Schmidhuber, Jürgen & Kurzweil AI Staff (November 29, 2016). "Contributors: Jürgen Schmidhuber" (self-published bio). KurzweilAI.net. Retrieved 29 November 2016.
- Scholarpedia article on Universal Search, discussing Schmidhuber's Speed Prior, Optimal Ordered Problem Solver, Gödel machine
- German article on Schmidhuber in CIO magazine: "Der ideale Wissenschaftler" [transl. "The Ideal Scientist"]
- Build An Optimal Scientist, Then Retire: Interview in H+ magazine, 2010
- Neural Networks and Financial Markets: Interview in International Business Times, 2016
- Space is made for robots (in German): Interview in Der Spiegel, 2016
- Schmidhuber wants to build highly intelligent robot (in German): Interview in FAZ, 2016
External links
Video presentations
- Videos of Juergen Schmidhuber & the Swiss AI Lab IDSIA
- On-going research on the iCub humanoid at the IDSIA Robotics Lab
- Video of Schmidhuber's talk on artificial curiosity and creativity at the Singularity Summit 2009, NYC
- TV clip: Schmidhuber on computable universes on Through the Wormhole with Morgan Freeman.