I started writing this post with pen in a notebook while sitting in a Toyota in Charlottesville, Virginia. My kid was at an appointment. I had an hour.
There’s a tweet going around that we’re closer to 2050 than to the year 2000. That means nothing to my 14 y.o. kid. It signifies everything to me. Twenty-five years ago, I moved to Miami Beach and started a great job at the University of Miami. I always refer to those years as the prime of my life (though I know I’m lying to myself).
What about 25 years from now? Will I even still be alive in my mid-80s? I don’t worry about that. I have never worried about dying. Likely that’s because my father died suddenly when he was 37. Since my childhood, I’ve always known that death could come at any moment.
Any thoughts about our AI future in 2050 are pointless speculation. The now and the soon are what’s relevant.
I use ChatGPT throughout the day. (Shout out to Gemini Deep Research. If you’re not using Gemini for deep research, then you’re missing out.) I use ChatGPT more often in a day than I have ever used a search engine or almost any social media site. I must admit that I’m a big consumer of YouTube videos. I love YouTube! There’s something telling about how many Google products are core to my life: Gmail, Gemini Deep Research, YouTube, NotebookLM.
Yet, despite my use of Google products, I feel locked into the OpenAI ecosystem for most of my chats: I’ve set up projects in ChatGPT, and it holds the memory of the essential work I’m developing.
I usually keep a browser tab open to ChatGPT so I can jump to it at any moment for an insight, for help thinking through an issue, for a quick overview of a topic, or to answer a question.
The tech giants have cornered the market on generalized use of AI.
In the last few weeks, two essays have caught my attention. The first is Sam Altman’s The Gentle Singularity. The second, published earlier in 2025, is Stop treating ‘AGI’ as the north-star goal of AI research, by a team of 16 AI researchers. (The primary authors are Borhane Blili-Hamelin and Christopher Graziul.) I recommend reading the two essays in light of each other, since they represent fundamentally different philosophies about the future of AI.
Over the last year, I often told the students in my college courses that technology seeps into our lives. As Altman casually points out, “In some big sense, ChatGPT is already more powerful than any human who has ever lived.” We now take for granted that we have multiple systems (ChatGPT, Gemini, Claude, Perplexity, etc.) that are more powerful than any machine we’ve ever encountered. Just a few years ago, those capabilities would have been doubted by most and shocking to almost everyone. Now, we just shrug our shoulders.
If you’re paying attention to AI developments, you’ll know 2025 is the year of AI agents, though the term AI agent has been corrupted by marketing speak. In a recent video, Andrew Ng stresses that we should stop dissecting the term AI agent and refer instead to agentic systems. It’s an excellent video. Watch it if you’re interested in agents, uh, agentic AI. Even if you’re not interested in agentic AI, you should watch the video. Andrew Ng always has insights.
Lots of companies are working on robots. I’m trying not to think about robots. I admit that I’m not ready for humanoid robots. But when I look at any parking lot and see the amount of money that Americans spend on their cars, then (well, hell yeah) Americans will be buying humanoid robots (on credit). There are likely to be robot dealerships (like car dealers) showcasing all the latest models. But these robot retailers will be snazzier, more like the Apple Store than the Chevrolet dealer. (Inevitably, some dude will wonder if his humanoid robot could drive his Ford F-150 someday; there’s an odd type of American who might be more comfortable with a humanoid robot driving a truck than with self-driving vehicles. Sigh.)
The 2030s
Sam Altman gives us a reassuring truth: “In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes.”
I avoid the word singularity. It means nothing to me, and means nothing to most people. It’s jargon.
Of course, the 2030s will be different from the past, just as the 1930s were different from the preceding decades. It’s easy to see change in hindsight, but very difficult when you’re living through it.
The concern is that the next decade will be radically different than anything we have known. We’ve been here before. In the 1990s, those of us working deeply on the Internet and the emerging Web knew that huge changes were coming, though no one knew quite what shape would emerge in the 2000s.
Jobs did disappear. New types of jobs were created. Will that pattern hold? I’m not so sure.
Sam Altman:
There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.
The word abundance is getting tossed around a lot. I’ve recently started listening to the Ezra Klein book of the same name.
Part of me loves Altman’s vision of the gentle singularity. But here are some essential questions that we must consider:
Who guides abundant intelligence toward meaningful ends?
How do we ensure powerful cognitive tools align with genuine human flourishing?
In what ways might these tools subtly diminish critical human faculties such as reflection and deliberate thought?
These questions directly shape my current collaborative effort with a business partner. (More on that in a future post.) Together, we envision AI as a thought partner designed not to shortcut thinking, but to encourage slow, reflective inquiry. We believe that how we think shapes who we become.
AI Safety & Alignment
AI safety gets a disparaging reaction from some people on X. Admittedly, some proponents of AI safety are overly cautious. And then there are the anti-AI folks who pop up in any conversation about AI art. Altman gives AI safety some credence: “We do need to solve the safety issues, technically and societally.” Let’s look at the essay Stop treating ‘AGI’ as the north-star goal of AI research. (See earlier link above.)
Is the focus on AGI not only counterproductive but also dangerous?
The illusion of consensus. The terms AGI and superintelligence create a false sense among the uninformed public that everyone in the AI industry agrees on what society should be trying to accomplish with AI. Of course, if you’re involved with AI, you know there’s little agreement even on what those terms mean, other than an AI that is more capable in every domain than any human. That’s a really broad statement.
Specificity over Generality
We need more AI-enabled specialized tools with clear, measurable, and context-aware goals. The future of AI cannot be built in silos controlled by a very small number of corporations. The greatest risk of a single “north-star” goal is the homogenization of research and the consolidation of power. OpenAI, Anthropic, and others play a very important role in the foundation of what we do with AI. In many cases, they will provide the optimal experience for the majority of people. In many other, more specialized cases, smaller players will focus more deeply on providing a better experience.

How do we reconcile visionary optimism for rapid technological progress with disciplined caution to ensure AI genuinely enhances human lives? That requires a commitment to open standards and decentralized architectures that ensure domain experts are not just passive recipients but active participants in shaping how AI is integrated into our digital lives.
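To make “clear, measurable, and context-aware goals” concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the tiny ICD-10 lookup stands in for a real specialized tool, and the two test cases stand in for a real domain benchmark. The point is the shape (a narrow task, ground truth, and an explicit pass/fail bar), not the particulars.

```python
from dataclasses import dataclass

@dataclass
class Case:
    question: str
    expected: str

# A narrow, concrete task with ground truth -- not "general intelligence."
CASES = [
    Case("Which ICD-10 code covers type 2 diabetes without complications?", "E11.9"),
    Case("Which ICD-10 code covers essential (primary) hypertension?", "I10"),
]

def answer_question(question: str) -> str:
    """Hypothetical stand-in for the specialized tool under evaluation."""
    lookup = {"diabetes": "E11.9", "hypertension": "I10"}
    return next((code for key, code in lookup.items() if key in question), "")

def accuracy(cases: list[Case]) -> float:
    """The measurable goal: exact-match accuracy on the domain test set."""
    hits = sum(answer_question(c.question) == c.expected for c in cases)
    return hits / len(cases)

if __name__ == "__main__":
    score = accuracy(CASES)
    print(f"accuracy: {score:.0%}")
    # An explicit bar turns "is it good?" into a yes/no question.
    assert score >= 0.9, "tool does not yet meet the stated goal"
```

Swap in a real model call and a real test set, and the same little harness tells you, in one number, whether a tool meets the goal you actually set for it. That’s the kind of specificity the essay’s authors are asking for.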
[Table: divergent viewpoints of the two essays.] The divergent viewpoints table was created with the assistance of Gemini 2.5 Pro. Formatting was created with the assistance of Datawrapper.
The next time you find yourself talking about AGI, take a breath. Think about a very specific use case of AI that does not yet exist. Think about how you would measure how well AI accomplishes that goal. Find a specific use case of AI that really energizes you. Or, identify an aspect of society or our lives that you want to be sure does not get lost as AI advances. Those are the things you work on. That’s how you craft a responsible path to AI.