The Next AI Revolution Is Driverless
A conversation with computer scientist Dr. David Lindell
I tried Meta’s virtual reality (VR) headset last year and almost threw up. The lag between turning my head and the screen catching up made my brain scream that something was wrong—that uncanny valley feeling where technology is close enough to reality to be unsettling, but not close enough to be convincing.
That disorienting experience? It stems from the same underlying problem that will determine whether your job involves competing with a robot, and whether the glasses you wear will become as essential as your phone.
Dr. David Lindell, a computer vision researcher at the University of Toronto, is teaching machines to see the way humans do. Not just record images like a camera, but actually understand what they’re looking at: the difference between a plastic bag and a kid running into the street. A puddle versus a pothole. Your face versus a stranger’s, even in the dark.
Companies are pouring tens of billions into making this real in the next five years. When they succeed—not if, when—the economics of transportation, employment, and how you navigate cities will transform completely. And most of Gen-Z has no idea it’s happening.
AI can already write essays and generate art, but it still struggles to identify a stop sign in the rain. And your phone’s facial recognition fails if you wear different glasses.
The missing piece? Teaching machines to actually see, not just record. And that gap is finally closing, which means industries built on human drivers, retail workers, and caregivers are about to face the same disruption streaming brought to CDs and DVDs.
When you look at a cat, your brain instantly understands it’s alive, soft, probably wants to be left alone. You know not to grab it by the tail or approach when its ears are back. Cameras just capture flat images with no comprehension.
A LiDAR sensor shoots out laser pulses, measures the timing and intensity of the light that bounces back, then builds a model from that data to interpret what it “saw.” But those lasers behave wildly differently depending on what they hit—light scatters and creates noise when bouncing around corners, and reflects differently off shiny versus matte surfaces. All of that affects how accurately the machine interprets what’s actually there.
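The bounce-back measurement comes down to time-of-flight: multiply the round-trip time by the speed of light, then halve it because the pulse travels out and back. A minimal sketch (the 66.7-nanosecond figure is purely illustrative):

```python
# Time-of-flight distance estimate for a single LiDAR pulse.
# The sensor times how long a reflection takes to return; the light
# travels out AND back, so the one-way distance is half the trip.

SPEED_OF_LIGHT = 299_792_458  # meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A return after ~66.7 nanoseconds puts the object about 10 meters away.
print(round(lidar_distance(66.7e-9), 2))
```

Real systems fire millions of these pulses per second and must also decide which returns are trustworthy, which is where the scattering and reflection problems above come in.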
“We’re teaching machines to understand the physics of how light behaves in the real world,” Lindell explained. “Not just collect data, but interpret what it means.”
That shift from recording to understanding unlocks everything else. It’s happening right now in three areas that will directly affect your job prospects and whether owning a car will even make economic sense.
Remember that nauseating VR headset? Meta has spent $46 billion since 2019 trying to fix it, and much of that effort failed spectacularly. Mark Zuckerberg’s metaverse presentation became a meme. The company’s stock crashed.
But something changed. Meta’s Ray-Ban smart glasses, second generation, actually work. They look like normal sunglasses, have a camera built in, and use computer vision AI to understand what you’re looking at in real-time. Point them at a landmark and they’ll tell you what it is. Look at foreign text and they’ll translate it. Lost in a new city? They’ll guide you without pulling out your phone.
Meta’s betting that in five years, wearing AI-powered glasses will be as normal as wearing AirPods. If they’re right, the smartphone industry, currently employing hundreds of thousands and generating over a trillion dollars annually, faces disruption. Lindell noted that Meta’s breakthroughs in traditional VR and augmented reality (AR) laid the groundwork for these smart glasses to actually work. The trajectory, from “this makes me throw up” to “this is actually useful” in two years, suggests this isn’t speculation.
While everyone debated whether self-driving cars would ever work, Waymo deployed them. Right now, in San Francisco, Los Angeles, and Phoenix, you can summon a driverless car. Over 150,000 people do this weekly. The cars work.
I asked Lindell what changed. “Two things happened,” he explained. “First, the computer models got much better at handling edge cases—tail risks: a pedestrian stepping into the street while texting or a plastic bag blowing across the road you ignore versus a box you avoid. We use ‘digital twin’ models—virtual replications of real environments that let autonomous systems train on extreme conditions they haven’t physically encountered. Snow, heavy rain, fog. The AI responds to conditions it’s never actually driven in because it learned from the digital twin.”
“But the second piece is more fundamental,” he continued. “We’re teaching AI to understand the physics of how light behaves with LiDAR sensors—accounting for things like glare, reflections, how light scatters differently off surfaces. This lets the algorithms process data in ways that seem almost impossible. The car can essentially see past corners by detecting reflected light, or see through smoke and fog by understanding how light diffuses.” When you’re driving in heavy fog, that’s not just useful—it’s the difference between the technology working or failing completely.
Waymo’s vehicles have driven over 20 million fully autonomous miles. But here’s what nobody’s talking about: the economics of autonomous driving.
A human Uber driver might make $25-30 per hour after expenses. Waymo doesn’t pay a driver, so operating costs could eventually drop by as much as 60-70%. That means massively cheaper rides for you and higher margins for Waymo.
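The arithmetic is simple enough to sketch. Using the article’s figures, and treating the driver’s pay as the bulk of the operating cost that disappears, the hourly cost of an autonomous ride looks roughly like this (all numbers are illustrative, not Waymo’s actual economics):

```python
# Back-of-envelope ride economics using the article's figures:
# a human driver nets $25-30/hour, and removing the driver could
# cut operating costs by 60-70%. Illustrative only.

def autonomous_cost_per_hour(human_cost: float, reduction: float) -> float:
    """Hourly operating cost after the stated percentage reduction."""
    return human_cost * (1 - reduction)

for human_cost in (25, 30):
    for reduction in (0.60, 0.70):
        cost = autonomous_cost_per_hour(human_cost, reduction)
        print(f"${human_cost}/hr human, {reduction:.0%} cut -> ${cost:.2f}/hr")
```

Even at the conservative end, the per-hour cost lands in the single digits, which is why the pricing pressure on human-driven rides is structural rather than temporary.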
Now multiply that across every transportation job. There are 3.5 million truck drivers. Millions more rideshare and delivery drivers. Most will face obsolescence within your working lifetime, not because the technology might work someday, but because it already works and is being deployed city by city right now.
Research has shown Waymo’s autonomous vehicles had 91% fewer serious injury crashes, 79% fewer airbag deployments, and 80% fewer injury-causing crashes than human drivers over the same distance—they’re not just as safe as humans, they’re demonstrably safer.
In five years, autonomous vehicles will likely operate in most major cities. In ten, human-driven Ubers might be the exception. The world where you need a driver’s license, own a car, or pay for parking? That world has an expiration date.
If you’re choosing a career, this should terrify or excite you depending on which side of the disruption you’re on.
This is where Lindell gets most excited, and where his research directly affects whether you’ll compete with a humanoid robot for entry-level work.
“We’re getting humanoid robots to see as much as possible,” he told me. “That means we are seeing humanoid robots being placed in real-world environments—not controlled labs, but actual messy human spaces. Letting them interact with humans as a way of collecting data.” He paused, energized, and said we’re probably five to seven years from humanoid robots being commercially viable. Not as toys, but as genuinely useful machines operating in environments designed for humans.
For decades, humanoid robots were stuck doing repetitive tasks in controlled environments. But they couldn’t navigate a cluttered apartment, couldn’t hand you coffee without crushing or dropping it, couldn’t understand that when you turn your back you’re not leaving, just grabbing something.
Computer vision is changing that. Companies like Boston Dynamics, Tesla, Unitree, Ubtech, and Figure AI are building humanoid robots that walk up stairs, manipulate objects of different weights, and navigate crowded spaces. The breakthrough? Robots now learn from visual data the way humans do. Show the robot thousands of interaction examples, and it learns patterns.
In ten years, you might see humanoid robots in hospitals, warehouses, elderly care facilities, restaurants.
Now the uncomfortable question: What jobs are safe from these new technologies? If a robot can see well enough to navigate your apartment, understand which dishes are fragile, and learn from watching you clean once, what happens to home care work? Elderly care? Service industry jobs currently employing millions of Gen-Z workers?
The economics shift. A home care robot might cost $30,000 upfront but work 24/7 for years. A human costs $15-20 per hour, needs breaks, gets sick, quits. From a pure cost perspective, which would you choose if you were running a nursing home?
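The break-even math behind that choice is easy to run. Using the article’s numbers, a $30,000 robot working around the clock recoups its upfront cost in a matter of weeks of labor (this ignores maintenance, downtime, and quality of care; it’s purely the cost framing above):

```python
# Break-even sketch for the article's figures: a $30,000 robot
# versus a $15-20/hour human worker. Maintenance, downtime, and
# quality of care are deliberately ignored -- cost framing only.

ROBOT_UPFRONT = 30_000  # dollars

def breakeven_hours(human_wage: float) -> float:
    """Hours of replaced labor at which the robot's cost is recouped."""
    return ROBOT_UPFRONT / human_wage

for wage in (15, 20):
    hours = breakeven_hours(wage)
    print(f"${wage}/hr: pays for itself after {hours:.0f} hours "
          f"(~{hours / 24:.0f} days of round-the-clock work)")
```

At $15 per hour, that’s 2,000 hours of labor, or under three months of continuous operation, which is why the pure-cost comparison is so lopsided.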
Amazon already knows the answer. A leaked internal memo revealed the company expects to automate more than half a million warehouse roles by 2033. Not because they want to eliminate jobs, but because the unit economics work: automation could save up to 30 cents on every item Amazon stores and ships from its warehouses.
This isn’t anti-technology. It’s basic economics: when technology gets good enough and cheap enough, companies use it. The question is whether you’re building skills that complement these technologies or compete with them.
Think about 2015. Nobody was taking Ubers everywhere. Instagram was just photos. TikTok didn’t exist. You couldn’t ask your phone complex questions. That was ten years ago.
Now imagine 2030 when machines see and understand the physical world as well as humans. You might wear AI glasses that translate signs in real-time, overlay navigation, recognize people you’ve met, identify plants and buildings instantly. Your commute could involve autonomous vehicles as standard as Uber—except cheaper. Humanoid robots might handle tasks in coffee shops and grocery stores.
Every example I just gave has a working prototype or early deployment right now. The question isn’t whether this works, it’s how fast it scales and whether you’re positioned to benefit from or be displaced by it.
I asked Lindell what advice he’d give Gen-Z about keeping up.
His answer was simple: “Just use the tools. Constantly. I was playing with Google’s new Gemini 3 model last week. I’m just seeing what it can do, which areas it got better at, what’s changed. When you engage with technology incrementally, adapting feels natural rather than overwhelming.”
You don’t need to become an AI scientist, but you do need to understand what’s changing and use these technologies enough that they don’t feel foreign when they become standard. In ten years, you’ll live in a world where machines see and understand physical reality. Where AI-powered glasses are as common as smartphones. Where autonomous vehicles are the default. Where humanoid robots operate alongside humans in your local coffee shop.
That world is being built today. The question is whether you’re watching it happen or understanding it well enough to position yourself on the side that benefits. Because this technology will reshape employment, transportation, and urban life whether you’re ready or not.
A special thanks to Dr. David Lindell for taking the time to speak with me. You can find him on X here.








