Thomas G. Dietterich

Born: 1954, South Weymouth, Massachusetts
Nationality: American
Known for: Executive Editor of Machine Learning (journal) (1992–98)
Alma mater: Naperville Central High School; Oberlin College; University of Illinois, Urbana-Champaign; Stanford University
Thesis: “Constraint-Propagation Techniques for Theory-Driven Data Interpretation” (1984)
Doctoral advisor: Bruce G. Buchanan
Institutions: Oregon State University

Thomas G. Dietterich is Emeritus Professor of computer science at Oregon State University. He is one of the founders of the field of machine learning. He served as Executive Editor of Machine Learning (journal) (1992–98) and helped co-found the Journal of Machine Learning Research. In response to media attention on the dangers of artificial intelligence, Dietterich has offered an academic perspective to a broad range of media outlets, including National Public Radio, Business Insider, Microsoft Research, CNET, and The Wall Street Journal.[1]

Among his research contributions were the invention of error-correcting output coding for multi-class classification, the formalization of the multiple-instance problem, the MAXQ framework for hierarchical reinforcement learning, and the development of methods for integrating non-parametric regression trees into probabilistic graphical models.
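
To give a flavor of error-correcting output coding, the sketch below shows the core idea in Python: each class is assigned a binary codeword, one binary classifier is trained per codeword bit, and prediction picks the class whose codeword is nearest in Hamming distance to the predicted bits. The codeword matrix, logistic-regression base learner, and toy data are illustrative assumptions, not the setup from Dietterich's original papers.

    # Minimal error-correcting output coding (ECOC) sketch (illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_ecoc(X, y, code_matrix):
        """Train one binary classifier per codeword bit (column of code_matrix)."""
        learners = []
        for bit in range(code_matrix.shape[1]):
            # Relabel each example with the bit its class's codeword assigns.
            targets = code_matrix[y, bit]
            learners.append(LogisticRegression().fit(X, targets))
        return learners

    def predict_ecoc(X, learners, code_matrix):
        """Predict all bits, then return the class whose codeword is nearest."""
        bits = np.column_stack([clf.predict(X) for clf in learners])
        hamming = (bits[:, None, :] != code_matrix[None, :, :]).sum(axis=2)
        return hamming.argmin(axis=1)

    # Toy usage: four classes encoded with hypothetical 7-bit codewords.
    code_matrix = np.array([[0, 0, 0, 0, 0, 0, 0],
                            [0, 1, 1, 1, 1, 0, 1],
                            [1, 0, 1, 1, 0, 1, 1],
                            [1, 1, 0, 1, 1, 1, 0]])
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = rng.integers(0, 4, size=200)
    learners = train_ecoc(X, y, code_matrix)
    print(predict_ecoc(X[:5], learners, code_matrix))

Because the codewords are well separated, a per-bit classifier can be wrong and the nearest-codeword decoding can still recover the correct class, which is the error-correcting property the method is named for.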

Biography and Education

Thomas Dietterich was born in South Weymouth, Massachusetts in 1954.[2] His family later moved to New Jersey and then again to Illinois, where Tom graduated from Naperville Central High School.[2] Dietterich then entered Oberlin College and began his undergraduate studies.[2] In 1977, Dietterich graduated from Oberlin with a degree in mathematics, focusing on probability and statistics.[2]

Dietterich spent the following two years at the University of Illinois, Urbana-Champaign.[2] He then began his doctoral studies in the Department of Computer Science at Stanford University.[2] Dietterich received his Ph.D. in 1984 and moved to Corvallis, Oregon, where he was hired as an Assistant Professor in Computer Science.[2] In 2016, Dietterich retired from his position at Oregon State University.[2]

Throughout his career, Dietterich has worked to promote scientific publication and conference presentations. For many years, he was the editor of the MIT Press series on Adaptive Computation and Machine Learning.[3] He was also co-editor of the Morgan & Claypool Synthesis Series on Artificial Intelligence and Machine Learning. He has organized several conferences and workshops, including serving as Technical Program Co-Chair of the National Conference on Artificial Intelligence (AAAI-90), Technical Program Chair of Neural Information Processing Systems (NIPS-2000), and General Chair of NIPS-2001. He served as founding President of the International Machine Learning Society and has been a member of the IMLS Board since its founding. He is currently also a member of the Steering Committee of the Asian Conference on Machine Learning.

Research Interests

Professor Dietterich is interested in all aspects of machine learning. There are three major strands to his research. First, he is interested in the fundamental questions of artificial intelligence and how machine learning can provide the basis for building integrated intelligent systems. Second, he is interested in ways that people and computers can collaborate to solve challenging problems. And third, he is interested in applying machine learning to problems in the ecological sciences and ecosystem management as part of the emerging field of computational sustainability.

Over his career, he has worked on a wide variety of problems, ranging from drug design to user interfaces to computer security. His current focus is on ways that computer science methods can help advance ecological science and improve our management of the Earth's ecosystems. This passion has led to several projects, including research on wildfire management, invasive vegetation, and the distribution and migration of birds. For example, Dietterich's research is helping scientists at the Cornell Lab of Ornithology answer questions such as: How do birds decide to migrate north? How do they know when to land and stop over for a few days? How do they choose where to make a nest? Tens of thousands of volunteer birdwatchers (citizen scientists) all over the world contribute data to the study by submitting their bird sightings to the eBird website. The amount of data is overwhelming: in March 2012, eBird had over 3.1 million bird observations. Machine learning can uncover patterns in these data to model the migration of species, and the same techniques have many other applications that can help organizations better manage our forests, oceans, and endangered species, as well as improve traffic flow, water systems, the electrical power grid, and more.[4]
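
As a rough illustration of the kind of modeling involved (not Dietterich's or the Cornell Lab's actual pipeline), the sketch below fits a classifier to hypothetical eBird-style checklist records described by latitude, longitude, and day of year, then reads off a seasonal occurrence curve for one site. All data, features, and parameter choices are invented for illustration.

    # Toy species-occurrence model on synthetic "checklist" data (illustrative only).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)
    n = 5000

    # Hypothetical checklist features: latitude, longitude, day of year.
    lat = rng.uniform(25, 50, n)
    lon = rng.uniform(-125, -65, n)
    day = rng.integers(1, 366, n)

    # Toy ground truth: the species is mostly reported at northern latitudes in summer.
    p_present = 1 / (1 + np.exp(-(0.3 * (lat - 40) + 2.0 * np.cos((day - 180) / 58.0))))
    observed = rng.random(n) < p_present

    X = np.column_stack([lat, lon, day])
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, observed)

    # Estimated probability of reporting the species at one site across the year,
    # tracing out a (toy) seasonal occupancy/migration curve.
    site = np.column_stack([np.full(365, 45.0), np.full(365, -90.0), np.arange(1, 366)])
    print(model.predict_proba(site)[:, 1][::30])  # sample roughly every 30 days

The seasonal probabilities traced out over the year are the kind of pattern that, at scale and with real observational data, can describe when and where a species is likely to be found.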

"I realized I wanted to have an impact on something that really mattered - and certainly the whole Earth's ecosystem, of which we are a part, is under threat in so many ways. And so if there's some way that I can use my technical skills to improve both the science base and the tools needed for policy and management decisions, then I would like to do that. I am passionate about that."[4]

Dangers of AI: An Academic Perspective

The most realistic risks of artificial intelligence, says Thomas Dietterich, an expert in the field, are basic mistakes, breakdowns, and cyber attacks, rather than machines that become super-powerful, run amok, and try to destroy the human race.[5]

"For a long time the risks of artificial intelligence have mostly been discussed in a few small, academic circles, and now they are getting some long-overdue attention," Dietterich said. "That attention, and funding to support it, is a very important step."[5]

Dietterich's perspective on the problems with AI, however, is a little more pedestrian than most: the worry is not so much that it will overwhelm humanity, but that, like most complex engineered systems, it may not always work.[5]

"We are now talking about doing some pretty difficult and exciting things with AI, such as automobiles that drive themselves, or robots that can effect rescues or operate weapons," Dietterich said. "These are high-stakes tasks that will depend on enormously complex algorithms." "The biggest risk is that those algorithms may not always work. We need to be conscious of this risk and create systems that can still function safely even when AI components commit errors."[5]

Dietterich said he considers machines becoming self-aware and trying to exterminate humans to be more science fiction than scientific fact. But to the extent that computer systems are given increasingly dangerous tasks, and asked to learn from and interpret their experiences, he said they may simply make mistakes.[5]

"Computer systems can already beat humans at chess, but that doesn't mean they can't make a wrong move. They can reason, but that doesn't mean they always get the right answer. And they may be powerful, but that's not the same thing as saying they will develop superpowers."[5]

Dietterich believes the more immediate and real work is to identify how mistakes might occur and to create systems that can deal with, minimize, or accommodate them. He believes some of the most imminent malicious threats from computers will probably emerge from cyber attacks: humans with malicious intent using artificial intelligence and powerful computers to attack other computer systems are a real threat, and thus a good place to focus the first work in this field.[5]

Dietterich has been sought out by many media outlets for his academic perspective on the dangers of artificial intelligence. He was the plenary speaker at "Wait, What?", a future technology forum hosted by DARPA, on September 11, 2015. In an article by Digital Trends in February 2015, Dietterich was tapped for his expertise on the topic:[1]

Dietterich lists bugs, cyber-attacks and user interface issues as the three biggest risks of artificial intelligence - or any other software, for that matter. "Before we put computers in control of high-stakes decisions," he says, "our software systems must be carefully validated to ensure that these problems do not arise." It's a matter of steady stable progress with great attention to detail, rather than the "apocalyptic doomsday scenarios" that can so easily capture the imagination when discussing AI.[6]

In July 2015, Dietterich was interviewed for NPR's On Point segment "Managing the Artificial Intelligence Risk". Dietterich was also featured by Business Insider,[7] Business Insider Australia,[8] FedScoop,[9] CNET,[10] Microsoft Research,[11] PC Magazine,[12] TechInsider,[13] and the U.S. Department of Defense;[14] was filmed by Communications of the ACM and KEZI; and has been mentioned in articles by The Wall Street Journal,[15] Tech Times,[16] and The Corvallis Advocate.[1][17]

Positions Held

Awards and Honors

Thomas Dietterich was honored by Oregon State University in the spring of 2013 as a "Distinguished Professor" for his work as a pioneer in the field of machine learning and for being one of the most highly cited scientists in his field.[26] He has also earned exclusive "Fellow" status in the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery.[4] Over his career, he obtained more than $30 million in research grants, helped build a world-class research group at Oregon State, and created three software companies. He also co-founded two of the field's leading journals and was elected the first president of the International Machine Learning Society.[19]

His other awards and honors include:

Selected Publications

References

  1. Robertson, Rachel. "EEC's Tom Dietterich at the Forefront of AI Dialog". Oregon State University: OSU EECS News. Oregon State University. Retrieved 17 August 2016.
  2. Peterson, Chris. "Tom Dietterich Oral History Interview". Oregon State University: OSU Libraries. Oregon State University. Retrieved 19 August 2016.
  3. Dietterich, Thomas G. "Titles by this editor". The MIT Press. The MIT Press. Retrieved 19 August 2016.
  4. Dietterich, Thomas. "Tom Dietterich Profile". Oregon State University: Electrical Engineering and Computer Science. Oregon State University. Retrieved 17 August 2016.
  5. Stauth, David. "Expert: Artificial intelligence systems more apt to fail than to destroy". Oregon State University: News and Research Communications. Oregon State University. Retrieved 17 August 2016.
  6. Jones, Brad. "Is Cortana A Dangerous Step Towards Artificial Intelligence?". Digital Trends. Designtechnica Corp. Retrieved 17 August 2016.
  7. Del Prado, Guia Marie. "This is the biggest shift going on in artificial intelligence". Business Insider. Business Insider Inc. Retrieved 18 August 2016.
  8. Del Prado, Guia Marie. "How much you should worry about 3 common 'robot apocalypse' scenarios". Business Insider Australia. Business Insider Inc. Retrieved 18 August 2016.
  9. Otto, Greg. "Why AI's future is risky (but not scary)". FedScoop. FedScoop. Retrieved 18 August 2016.
  10. Raghian, Ardalan; Renda, Matthew. "When Hollywood does AI, it's fun but farfetched". CNet. CBS Interactive Inc. Retrieved 18 August 2016.
  11. Linn, Allison. "Artificial intelligence is raising concerns, and here's what researchers are doing to address them". Microsoft Research. Microsoft. Retrieved 18 August 2016.
  12. Stuart, Sophia. "What Keeps AI Experts Up at Night?". PC Magazine. PCMag Digital Group. Retrieved 18 August 2016.
  13. Del Prado, Guia Marie. "Stephen Hawking warns of an 'intelligence explosion'". TechInsider. Business Insider Inc. Retrieved 18 August 2016.
  14. Pellerin, Cheryl. "DARPA Tech Forum Previews National Security Future". U.S. Department of Defense. U.S. Department of Defense. Retrieved 18 August 2016.
  15. Wladawsky-Berger, Irving. "What Should We Think of Machines That Think?". The Wall Street Journal. Dow Jones & Company Inc. Retrieved 18 August 2016.
  16. Maynard, James. "Elon Musk Donates $10 Million to Keep Us Safe from Artificial Intelligence". Tech Times. TechTimes Inc. Retrieved 19 August 2016.
  17. Reilly, Sidney. "Skynet Is Here to Enslave Us". The Corvallis Advocate. Corvallis Advocate. Retrieved 19 August 2016.
  18. "AAAI Officials". Association for the Advancement of Artificial Intelligence. Association for the Advancement of Artificial Intelligence. Retrieved 19 August 2016.
  19. Woerd, Josver. "BigML's Chief Scientist Elected President at AAAI". BigML. BigML. Retrieved 19 August 2016.
  20. Dietterich, Thomas. "Thomas G. Dietterich Home Page". Thomas G. Dietterich Home Page. Oregon State University. Retrieved 17 August 2016.
  21. Adams, Ron. "OSU Spin-off Company Created, Acquired by Seattle Firm". Oregon State University: News and Research Communications. Oregon State University. Retrieved 19 August 2016.
  22. Earnshaw, Aliza. "MyStrands raises $24 million". Portland Business Journal. American City Business Journals. Retrieved 19 August 2016.
  23. Dietterich, Thomas. "Thomas G. Dietterich". Oregon State University: Electrical Engineering and Computer Science. Oregon State University. Retrieved 17 August 2016.
  24. Dietterich, Thomas. "Curriculum Vita" (PDF). Retrieved 17 August 2016.
  25. "Faculty Awards". Oregon State University: Electrical Engineering and Computer Science. Oregon State University. Retrieved 19 August 2016.
  26. Stauth, David. "Dietterich Named AAAS Fellow". Oregon State University: News and Research Communications. Oregon State University. Retrieved 19 August 2016.
  27. Adams, Ron. "OSU Engineering Faculty, Staff, Students Honored". Oregon State University: News and Research Communications. Oregon State University. Retrieved 19 August 2016.
  28. Stauth, David. "College of Engineering awards presented". Oregon State University: News and Research Communications. Oregon State University. Retrieved 19 August 2016.
  29. Liu, Liping; Dietterich, Thomas G.; Li, Nan; Zhou, Zhi-Hua (2016). "Transductive Optimization of Top k Precision" (PDF). International Joint Conference on Artificial Intelligence: 1781–1787.
  30. Siddiqui, Md Amran; Fern, Alan; Dietterich, Thomas G.; Das, Shubhomoy (2016). "Finite Sample Complexity of Rare Pattern Anomaly Detection". Uncertainty in Artificial Intelligence.
  31. Taleghan, Majid Alkaee; Dietterich, Thomas G.; Crowley, Mark; Hall, Kim; Albers, H. Jo (December 2015). "PAC Optimal MDP Planning with Application to Invasive Species Management" (PDF). Journal of Machine Learning Research. 16: 3877–3903. Retrieved 17 August 2016.
  32. Dietterich, Thomas G.; Horvitz, Eric J. (2015). "Viewpoint Rise of Concerns about AI: Reflections and Directions" (PDF). Communications of the ACM: 38–40. doi:10.1145/2770869. Retrieved 17 August 2016.
  33. Dietterich, Thomas G. (2009). "Machine Learning in Ecosystem Informatics and Sustainability. Abstract of Invited Talk" (PDF). Proceedings of the 2009 International Joint Conference on Artificial Intelligence. Retrieved 18 August 2016.
  34. Dietterich, Thomas G.; Bao, Xinlong; Keiser, Victoria; Shen, Jianqiang (2010). "Machine Learning Methods for High Level Cyber Situation Awareness" (PDF). Cyber Situational Awareness: 227–247.
  35. Dietterich, Thomas G.; Domingos, Pedro; Getoor, Lise; Muggleton, Stephen; Tadepalli, Prasad (2008). "Structured machine learning: the next ten years" (PDF). Machine Learning: 3–23. doi:10.1007/s10994-008-5079-1. Retrieved 18 August 2016.
  36. Dietterich, Thomas G.; Bao, Xinlong (2008). "Integrating Multiple Learning Components Through Markov Logic" (PDF). Twenty-third Conference on Artificial Intelligence (AAAI-2008): 622–627. Retrieved 18 August 2016.
  37. Dietterich, Thomas G. (2007). "Machine Learning in Ecosystem Informatics" (PDF). Proceedings of the Tenth International Conference on Discovery Science. 4755. Retrieved 18 August 2016.
  38. Dietterich, Thomas G. (2004). "Learning and Reasoning" (PDF). Technical report, School of Electrical Engineering and Computer Science, Oregon State University. Retrieved 18 August 2016.
  39. Dietterich, Thomas G. (2003). "Machine Learning" (PDF). Nature Encyclopedia of Cognitive Science. Retrieved 18 August 2016.
  40. Dietterich, Thomas G. (2002). "Machine Learning for Sequential Data: A Review" (PDF). Structural, Syntactic, and Statistical Pattern Recognition; Lecture Notes in Computer Science. 2396: 15–30. Retrieved 18 August 2016.
  41. Thomas G. Dietterich (2002). Arbib, M. A., ed. The Handbook of Brain Theory and Neural Networks (2 ed.). Cambridge, MA: The MIT Press. pp. 405–408.
  42. Dietterich, Thomas G. (2000). "The Divide-and-Conquer Manifesto". Algorithmic Learning Theory 11th International Conference (ALT 2000): 13–26.
  43. Dietterich, Thomas G. (2000). "Hierarchical reinforcement learning with the MAXQ value function decomposition". Journal of Artificial Intelligence Research. 13: 227–303.
  44. Thomas G. Dietterich (2000). Hemmendinger, David; Ralston, Anthony; Reilly, Edwin, eds. The Encyclopedia of Computer Science (4 ed.). Thomson Computer Press. pp. 1056–1059.
  45. Thomas G. Dietterich (2000). Choueiry, B. Y.; Walsh, T., eds. Proceedings of the Symposium on Abstraction, Reformulation, and Approximation (SARA 2000), Lecture Notes in Artificial Intelligence. Springer-Verlag. pp. 26–44.

