Lee Sanders: A More Human Future of Healthcare
Soul of Silicon Valley Series
Lee Sanders, MD
Dr. Lee Sanders is a general pediatrician and Professor of Pediatrics at Stanford University School of Medicine, where he serves as Chief of the Division of General Pediatrics. A nationally recognized leader in health literacy and equity, he directs the Stanford Health Literacy Lab and holds joint appointments in Health Policy, Epidemiology, and International Studies. His research focuses on partnering with families to reduce disparities in child health and education, especially for children with complex medical needs. Dr. Sanders leads multiple federally funded studies, advises national health agencies including the CDC and FDA, and co-leads Stanford’s Population Health in Schools Lab. Fluent in Spanish and deeply committed to systems-based care, he also teaches at Stanford’s Human Biology Program and d.school.
Building a More “Human” Future of Healthcare with Dr. Lee Sanders
– Summary by Maya Lockwood
Over the summer, I’ve been exploring the “Soul” of Silicon Valley by talking with deep tech leaders whose work sits at the intersection of innovation, humanity, and healing. Few voices embody that blend more powerfully than Dr. Lee Sanders.
A nationally recognized expert in pediatrics, population health, and health equity, Lee wears many hats—physician, professor, policy thinker, and AI design lab director at Stanford. But perhaps most importantly, he’s a trusted healer and a man who’s walked through his own journey with illness, emerging with wisdom and compassion that can’t be taught.
We sat down to talk about trust in healthcare, designing AI for humanity, the role of community healers, and what it means to build systems of care worthy of the people they serve.
The Crisis of Trust—and a Path Forward
We are living through a global revolution in artificial intelligence—and a parallel crisis of trust.
“Historically, physicians and teachers have been among the most trusted professionals,” Lee shared. “But that trust is eroding. Especially over the past five years, we’ve seen it disappear fast.”
To restore it, he outlined three essential values—ones that are as critical to healing relationships as they are to designing new technologies:
Authenticity: “We must show up as human. Not all-knowing experts. Sometimes saying ‘I don’t know’ is the most powerful way to build trust.”
Accountability: “What happens after the visit matters. Following up, being dependable, delivering on promises—that’s how trust is earned.”
Specificity: “When I ask patients what they’re most hopeful or worried about, I write it down. In their words. And I honor that above the labs, the imaging, the algorithm.”
For Lee, these aren’t abstract values—they’re the foundation for a more human health system. And for any AI tool to succeed, it must reflect and reinforce them.
Redefining Who the Experts Are
One of the most powerful parts of our conversation centered on who we consider “experts” in healthcare—and who we often leave out.
“We tend to assume doctors and sometimes nurses are the only people qualified to solve healthcare challenges,” Lee said. “But what about the promotoras [community health workers], the immigrant women caring for elders, the spiritual leaders, the home caregivers? They heal too.”
At Stanford’s d.school, Lee brings these often-overlooked caregivers into the innovation process—not just as interview subjects, but as co-designers.
“They offer solutions we’d never imagine in a design sprint. If we’re serious about building systems that serve real people, we need their voice at the table from day one.”
Burnout, Meaning, and the Fourth Aim
The healthcare world has long talked about the “Triple Aim”: improving care, reducing cost, and enhancing population health. But Lee reminds us that a Fourth Aim is equally vital: caregiver well-being.
“Burnout among nurses, doctors, and home health aides is widespread—and dangerous,” he said. “Tech should reduce that burden, not add to it.”
So how do we do that?
Lead with grace. “We all fail, we all get rejected. Meaning is found when we offer ourselves and each other compassion.”
Create space for reflection. “We sacrifice too much time and humanity in the name of efficiency. People need time to make sense of their work.”
Design for connection. “There’s no app for meaning. It comes from safe, brave spaces—circles of trust where people can be seen and heard.”
In Lee’s view, technology should return time to caregivers—not take more of it away.
Guardrails as Creative Constraints
When we touched on AI policy, Lee lit up.
“Guardrails shouldn’t be seen as barriers—they’re creative constraints. In design, we thrive when we have boundaries to work within.”
He called for a dynamic, inclusive governance system—one that updates existing structures like CMS and The Joint Commission for the AI era.
But more importantly, he argued that the people most impacted by health inequities must help define those guardrails.
“Too many AI tools are designed by and for the affluent. But if you build for low-income, high-need populations—you’ll build systems that work for everyone.”
A Personal Story That Changed Everything
Toward the end of our conversation, Lee opened up about a life-altering moment that made this work deeply personal.
“A couple of years ago, I started having a small issue swallowing. My doctor sent me for a scope. When I woke up, the doctor said: ‘We think it’s esophageal cancer.’”
He Googled it. The five-year survival rate? 21%. Every article pointed to palliative care.
“I sat down with my wife. We held each other and cried for an hour.”
He immediately called every cancer expert he could. After rounds of tests, surgery, chemo, and radiation, his odds improved. Today, he’s in remission.
“But even with every advantage—education, connections, insurance—I still experienced confusion, delays, and miscommunication. What about the people next to me in the waiting room who didn’t speak English? Who didn’t look like me?”
Lee’s conclusion was clear:
“This is not a health system. It’s not always healthy, and often not very caring. And it’s failing the people who need it most.”
What He’d Tell the World
I asked Lee what one message he’d whisper to every policymaker, healthcare investor, and tech founder in the world.
His answer was simple and profound:
“Design with—not for—people. Especially the most vulnerable.
Ask: who’s not in the room? Bring them in.
And if I could scale one thing globally, it wouldn’t be AI or genetic editing.
It would be circles of love. The hugs that held me when I was sick.
That’s what healed me. That’s what makes a system human.”