Are we on the brink of achieving artificial general intelligence (AGI), or are we still a long way off? Can we trust these evolving systems, and what exactly is artificial intelligence (AI) anyway? To explore these questions, I am delighted to welcome a distinguished guest to my show today. Hello and welcome to my channel. My name is Roya Zevich, and here we discuss science, philosophy, religion, creativity, and artificial intelligence with some of the most interesting people and scholars from around the world. If this is your first time here, please consider subscribing, hitting the notification bell, and supporting the channel.
Today, I am honored to have Professor Gary Marcus on the show. Professor Marcus is a scientist, author, entrepreneur, and professor in the Department of Psychology at New York University. He has founded two AI startups and authored several books, including “Guitar Zero,” “Kluge,” and most recently, “Rebooting AI,” co-written with Ernest Davis. Welcome, Gary, and thank you for joining us today.
Gary Marcus: Thanks for having me, Roya. I’m glad to be here, even though it took a few months to set up. Many thanks to my Irish colleague for making this possible.
Roya Zevich: Before we dive in, let me explain why I have mixed feelings about having you on the show. A year ago, I decided to write a book on artificial intelligence for the general public, believing we were on the cusp of significant breakthroughs. Deep learning, DeepMind’s successes, and advances in natural language processing (NLP) all seemed to indicate that AGI was within reach. Then I read your book, “Rebooting AI,” which completely changed my perspective. It made me rethink my entire approach, as it clearly argued that we are not there yet. You assert that we are on the wrong track.
Would you consider yourself the ‘bad boy’ of AI, or perhaps a prophet of impending challenges?
Gary Marcus: I wouldn’t call myself a prophet of doom, but I do think my predictions have been more accurate than many you’ll find in the media. For example, in my 2012 article in The New Yorker about deep learning, I anticipated many of the problems we’re seeing today with causal reasoning, knowledge, and generalizability.
I’m not widely known as a prophet, but if you look at the record, my perspective is one of caution and realism. I’m not saying AI will never happen; I believe it can, but we’re on a problematic track right now, with AI systems that are untrustworthy and not very smart. This is a critical moment in AI’s history, because the systems we have are mediocre at best and sometimes dangerously inadequate.
Roya Zevich: When I read “Rebooting AI,” one name kept coming to mind: Marvin Minsky. In 1969, Minsky, one of the founding fathers of AI, co-authored “Perceptrons” with Seymour Papert, proving mathematical limits on what single-layer perceptrons can compute (most famously, that they cannot represent the XOR function). Many hold this work responsible for the AI winter that followed. Do you think your critique of current AI mirrors Minsky’s caution, but on a more fundamental level?
Gary Marcus: Minsky’s work was narrowly focused on mathematical proofs about what perceptrons could and could not compute, and those formal concerns remain valid today. My critique is broader: it’s about the essence of intelligence and our approach to achieving it. Minsky’s book highlighted limitations in early neural networks, but today’s systems still lack a deeper understanding of intelligence. They don’t generalize well beyond their training data, which is a fundamental issue. For example, a neural network trained to learn the identity function on even numbers fails to carry the pattern over to odd numbers, a problem I highlighted back in 1998.
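To make this concrete, here is a minimal sketch of the kind of experiment Marcus is describing. The architecture, sizes, and hyperparameters below are illustrative choices, not his original 1998 setup: a small network is trained to copy 8-bit binary inputs whose lowest bit is always 0 (even numbers), then tested on odd numbers.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N_BITS = 8  # numbers encoded as 8-bit binary vectors, lowest bit first

def to_bits(n):
    return torch.tensor([(n >> i) & 1 for i in range(N_BITS)], dtype=torch.float32)

# Training set: even numbers only, so the lowest bit is always 0.
train_x = torch.stack([to_bits(n) for n in range(0, 256, 2)])

# The task is the identity function: reproduce the input bits exactly.
model = nn.Sequential(nn.Linear(N_BITS, 16), nn.Tanh(),
                      nn.Linear(16, N_BITS), nn.Sigmoid())
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCELoss()

for _ in range(2000):
    opt.zero_grad()
    loss_fn(model(train_x), train_x).backward()
    opt.step()

# Test on odd numbers: a 1 in the lowest bit, never seen in training.
test_x = torch.stack([to_bits(n) for n in range(1, 256, 2)])
pred = model(test_x).round()
acc = (pred[:, 0] == test_x[:, 0]).float().mean().item()
print(f"lowest-bit accuracy on odd numbers: {acc:.2f}")  # typically ~0.00
```

The failure is structural: because the lowest input bit is zero in every training example, the weights attached to it never receive a gradient, so the network learns nothing about what a 1 in that position means.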
Roya Zevich: That’s a fascinating point. We still don’t understand why neural networks work the way they do, and they often fail at tasks requiring true generalization. You argue that AI is not just about computational power but about understanding intelligence. Can you elaborate?
Gary Marcus: Sure. AI today is heavily reliant on massive datasets and computational power, but it lacks the ability to understand and generalize from new situations. Human intelligence, even in children, demonstrates remarkable flexibility and generalization from limited data. AI systems, in contrast, often fail to extrapolate beyond their training data, highlighting a significant gap in our understanding and engineering of intelligence.
Roya Zevich: Your book discusses the importance of incorporating common sense and causal reasoning into AI systems. Why is this so crucial?
Gary Marcus: Common sense and causal reasoning are foundational to human cognition. They allow us to make sense of the world, anticipate outcomes, and navigate new situations. Current AI systems, however, operate largely on statistical correlations without a true understanding of causality. This limitation makes them unreliable for tasks that require more than just pattern recognition.
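A toy simulation (my construction for illustration, not an example from the book) shows why correlation alone breaks down: a model that predicts Y from X performs well on observational data, then fails as soon as X is set by intervention, because the correlation was driven by a hidden common cause Z.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Confounded world: a hidden cause Z drives both X and Y;
# X itself has no effect on Y.
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = z + 0.1 * rng.normal(size=n)

# A purely correlational model: linear regression of Y on X.
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
print(f"observational: y ~ {slope:.2f}x + {intercept:.2f}, "
      f"MSE = {np.mean(resid**2):.3f}")  # small error, looks great

# Intervention: set X by hand, cutting the Z -> X link.
# Y is generated exactly as before, but X no longer tracks Z.
x_do = rng.normal(size=n)
y_do = z + 0.1 * rng.normal(size=n)
pred = slope * x_do + intercept
print(f"interventional MSE = {np.mean((pred - y_do)**2):.3f}")  # ~2.0, useless
```

Nothing about the regression is wrong statistically; it simply encodes a correlation, not the causal structure, so it cannot anticipate what happens when the world is changed rather than observed.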
Roya Zevich: You mentioned that AI researchers often overlook insights from cognitive psychology and linguistics. How can we bridge this gap?
Gary Marcus: We need a more interdisciplinary approach to AI development. Cognitive psychologists, linguists, and AI researchers need to collaborate more closely. There’s an arrogance in some corners of the machine learning community, where significant progress has led to a disregard for insights from other fields. Understanding human cognition and language requires a holistic approach that incorporates knowledge from multiple disciplines.
Roya Zevich: You use the example of a neural network (OpenAI’s CLIP model) misclassifying an apple with a Post-it note reading “iPod” as evidence of AI’s superficial understanding. Can you explain why this is significant?
Gary Marcus: This example illustrates that current AI systems lack the ability to comprehend relationships and context. They recognize patterns but don’t understand the underlying concepts. This makes them vulnerable to simple tricks and limits their ability to function in complex, real-world scenarios.
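The underlying experiment can be reproduced with OpenAI’s open-sourced CLIP model, which is the system the apple demonstration was run on. A sketch, assuming the `clip` package is installed and that `apple_with_ipod_note.jpg` is a placeholder for your own photo of an apple with a handwritten “iPod” label:

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")

# Placeholder filename: a photo of an apple with a handwritten "iPod" label.
image = preprocess(Image.open("apple_with_ipod_note.jpg")).unsqueeze(0)
text = clip.tokenize(["a photo of an apple", "a photo of an iPod"])

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

# In OpenAI's reported experiments, the written label dominates the pixels:
# most of the probability mass lands on "a photo of an iPod".
print(dict(zip(["apple", "iPod"], probs[0].tolist())))
```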
Roya Zevich: As we move towards developing more reliable AI, what steps should we take to ensure these systems can be trusted?
Gary Marcus: We need to focus on integrating more robust methods for knowledge representation, memory, and causal reasoning into AI systems. This means moving beyond deep learning and incorporating symbolic reasoning and other classical AI techniques. It’s not just about more data and computational power; it’s about smarter algorithms and better integration of diverse approaches.
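As a toy illustration of the hybrid idea (my own sketch, with hard-coded probabilities standing in for a trained classifier): a perception module proposes digit readings, and a symbolic constraint restricts the final answer to readings consistent with known arithmetic.

```python
import itertools
import numpy as np

# Stand-in for a neural perception module: class probabilities for two
# handwritten digits in an expression known to satisfy a + b = 10.
# (Hard-coded here; in practice these would come from a trained classifier.)
p_a = np.array([0.05, 0.05, 0.05, 0.40, 0.35, 0.02, 0.02, 0.02, 0.02, 0.02])
p_b = np.array([0.02, 0.02, 0.02, 0.02, 0.02, 0.05, 0.45, 0.30, 0.05, 0.05])

# Pure pattern matching: read each digit independently by argmax.
a0, b0 = p_a.argmax(), p_b.argmax()
print(f"neural only: {a0} + {b0} = {a0 + b0}")  # 3 + 6 = 9, violates the rule

# Hybrid: keep the learned uncertainty, but search only over readings
# consistent with the symbolic constraint a + b = 10.
consistent = [(a, b) for a, b in itertools.product(range(10), repeat=2)
              if a + b == 10]
a1, b1 = max(consistent, key=lambda ab: p_a[ab[0]] * p_b[ab[1]])
print(f"with symbolic constraint: {a1} + {b1} = 10")  # 4 + 6 = 10
```

The pattern-matcher reads the most probable digits independently and produces an answer that violates the known rule; the hybrid keeps the network’s uncertainty but only ever considers interpretations the symbolic knowledge permits.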
Roya Zevich: Do you think we need new hardware to achieve these advancements, or is it primarily a software issue?
Gary Marcus: It’s primarily a software issue. Current hardware is capable, but we need better algorithms and data structures to leverage it effectively. We need to develop systems that can reason, learn efficiently, and integrate knowledge in a way that mirrors human cognitive processes.
Roya Zevich: Finally, how can we foster better collaboration among AI researchers and experts from other fields?
Gary Marcus: We need to build a culture of respect and openness, where contributions from cognitive psychology, linguistics, and other fields are valued and integrated. Interdisciplinary projects and collaborative research efforts will be crucial in advancing our understanding and development of AI.
Roya Zevich: Thank you so much, Gary, for this enlightening discussion. Your insights are invaluable, and your work is crucial in guiding the future of AI.
Gary Marcus: Thank you, Roya. It was a pleasure to be here.
Roya Zevich: That was Professor Gary Marcus, a leading voice in AI, urging us to reconsider our approach and integrate insights from diverse fields. If you enjoyed this discussion, please like, subscribe, and stay tuned for more conversations with leading thinkers in AI and beyond. Thank you for watching.
