I’m asking the students in both of my courses on AI to pay close attention to this statement by Yann LeCun, the chief AI scientist at Meta:
“AI will make dramatic progress over the next decade. There is no question that at some point in the future, AI systems will match and surpass human intellectual capabilities… It will not happen tomorrow, probably over the next decade or two.”
That’s an excerpt from LeCun’s talk to the UN Security Council. This 8-minute talk (starting at the 12-minute mark in the video), linked in the description, is packed with insights. LeCun is not some fringe scientist. He’s a leading figure in AI research. When he makes a prediction like this, it’s worth paying attention.
What’s striking is that LeCun comes right out and says that this need not be a frightening scenario.
“By amplifying human intelligence, AI may bring not just a new industrial revolution, but a new renaissance, a new period of enlightenment for humanity.”
Training and fine-tuning
Let's break down a core concept in machine learning that is usually not realized by the average person.
“AI systems are produced in two phases. One is training a foundation model, and the second phase is fine-tuning it for a particular application.”
The distinction between training and fine-tuning is very important. My data for deep learning class is focused on the training phase. My responsible AI class is focused on the applications that arise from fine-tuning foundation models for specific uses. I’ll go further into the distinction between training and fine-tuning in another post.
LeCun gets specific about the steps that are needed for ensuring that the benefits of AI are shared equitably across the globe.
“Foundation models must be trained on all the world's cultural materials in all languages if we want them to be accessible and useful to everyone around the world.
Since all of our digital diet will eventually be mediated by AI systems, fine-tuned systems need to be numerous, diverse, to represent all cultures and value systems around the world.
Two conditions are necessary for this to happen. Foundation models must be free and open source. And second, training must be performed in a collaborative and distributed fashion in multiple data centers around the world.”
Let’s repeat that. Make sure you understand these points:
Foundation models must be trained on all the world's cultural materials in all languages.
Fine-tuned systems, i.e., AI-enabled apps, must represent all cultures and value systems around the world
Two necessary conditions to accomplish global AI development:
Foundation models must be free and open source
Training must be collaborative, distributed across multiple data centers around the world.
This is going to require global collaboration among governments and the private sector.
“Governments and the private sector must work together to ensure this global network of infrastructure exists to support the development of AI, enabling people all over the world to participate in the creation of a common resource.”
LeCun is a strong advocate for open source foundation models. We have seen how successful the Internet and the Web have been over the last 35 years because of open source developments. If you are new to the concept of open source, be sure that you understand it.
“The future of AI is inevitably one in which free and open source foundation model will dominate. History shows that infrastructure software platforms always end up being open source. For example, the software infrastructure of the internet and the mobile communication networks are entirely open source.”
Two extraordinary examples of how open source collaboration have advanced global development through infrastructure are the work of the IETF and the W3C. Plus, there are all sorts of standards developed in collaboration by organizations throughout the world that we take for granted. The IETF is the Internet Engineering Task Force that defines the protocols that enable the Internet to function. The W3C, the World Wide Web Consortium, “develops standards and guidelines to help everyone build a web based on the principles of accessibility, internationalization, privacy and security”. These organizations have been operational for decades and produce work that is instrumental in our daily lives. Yet, very few people know about these groups. The remarkable thing is that anyone can join and participate in the process to have your voice heard.
LeCun stresses that AI cannot reflect only the values and perspectives of a small part of the planet. We all must advocate for training foundation models on data that represents the full spectrum of human languages and cultures.
A key aspect is regulation, which may be more challenging than any technical development.
“unifying the regulatory landscape so that the development and deployment of open source foundation model is not hindered.”
Think about the potential economic and social benefits of making AI accessible to everyone. We need some kind of global consensus on how to govern these powerful technologies. It's about finding that balance between encouraging progress and safeguarding our values.
LeCun's timeline that AI will surpass human intelligence within twenty years really drives home the point that we need to be proactive in shaping this future. And some leaders in the AI industry are saying this will happen within the next few years, though I think that’s mostly hype to drive up the valuation of their companies.
LeCun's talk underscores a critical message. The future of AI is not predetermined. It's being shaped right now by the choices we make, the projects we pursue, and the values we prioritize. There's a real sense of urgency about using AI to make a positive impact on the world.
Safety
But what about the risks? We hear a lot about the potential dangers of AI. Lecun addresses it head on.
“Foundation models must go through rigorous testing and red teaming.”
Red teaming is a process, usually internal to a company, where they try to identify vulnerabilities through prompting.
You might be wondering: couldn’t someone with malicious intent just ignore those safety checks and develop AI for harmful purposes anyway?
Yes, I think so. We need to figure out methods for combating that. But LeCun argues that open source platforms with their transparency and the scrutiny of the global community tend to be more secure.
You also might be wondering: if that's really going to happen within the next decade or two, shouldn't we be focusing all our energy on figuring out how to control something that could potentially be more intelligent than us?
Those are valid concerns and LeCun doesn't shy away from them. He believes that we can develop safety guardrails to guide their behavior and prevent harm.
“Those super intelligence systems will do our bidding and remain under our control. They will accomplish tasks that we give them, subject to safety guardrails. Guardrails will shape their behavior similarly to how inviolable laws would shape human behavior.”
Of course, even with societal norms and international law, we still see genocide and horrible acts of violence. But, with AI, we must do as much as we can with safety. It’s naïve to think that AI is not here to stay.
And this is where your interest in responsible AI becomes absolutely crucial. As we develop these incredibly powerful AI systems, we need to be thinking deeply about the ethical implications, the potential risks and the safeguards we need to put in place. This isn't just some abstract philosophical debate. It's a field that will be in high demand as AI continues to advance.
Our work & lives
LeCun paints a picture where we'll each have a team of AI assistants working for us.
“In the coming decade, AI will become pervasive. Everyone will have access to virtual staff of AI assistants at all times. They will help us in our daily lives, like a staff of human assistants…. They will provide easy access to knowledge in every language in the world.”
I’m a former librarian. For several decades, we have envisioned the Internet as the universal repository of all human knowledge. But what does it mean for global development when you can have conversations with an expert on any topic imaginable, in any language?
The dissemination of knowledge
LeCun's comparison of AI's impact to the invention of the printing press.
“It is often said that AI is enabling the next industrial revolution. I think the effect of AI on society may be more akin to the invention of the printing press and the wide dissemination of knowledge through printed material.”
Librarians, since the early 1990s, have been comparing the Internet to the invention of the printing press. Over the last 35 years, most scholarly information is now available online, though much of it still exists behind paywalls. The publishing process also has changed during this period. All books are designed mostly with Adobe InDesign software and produced as PDFs for printing, even those that are sent to high-end printing facilities for hardcover and paperback books. That has enabled a boom in self-publishing, though the quality of content varies greatly. More importantly, the digital process of book production means that all books published in the last few decades exist as ebooks in one form or another.
Libraries also have been digitizing archival materials that only existed as handwritten manuscripts and as documents never published. Likewise, in the early 2000s, Google embarked on a massive digitizing project in cooperation with major libraries to scan more than 40 million books.
“By amplifying human intelligence, AI may bring not just a new industrial revolution, but a new renaissance, a new period of enlightenment for humanity.”
But to get there, we need to adjust the challenges of global collaboration, ethical development, and ensuring that the benefits of AI reach everyone, not just a privileged few.
Call to action
Let's talk about what this means for you, someone who's just beginning to explore AI.
There will be a huge demand for people who can bridge the gap between the technical and the social, ensuring that AI is developed and deployed in ways that are both sustainable and equitable.
AI is complex and moving fast. It can feel overwhelming. But the key is to break it down and focus on what truly excites you.
What aspects of AI resonate most strongly with you?
Are you drawn to the technical challenges of developing safe and reliable AI systems?
Are you more interested in the ethical and societal implications of these technologies?
How do we ensure that AI, even at a super intelligent level, remains aligned with human values? How do we build in safeguards to prevent unintended consequences? These are some of the biggest challenges facing us today. And they're challenges that you have the opportunity to tackle.
Remember, you don't have to figure it all out today. AI is constantly evolving and there are many ways to get involved. It's more about finding your niche within this evolving landscape.