Description:
Joe McCormack, a principal software engineer at Babylist and a Harvard CS graduate, lost most of his central vision to Leber's hereditary optic neuropathy, a rare genetic disorder, before starting college. In this episode of How I AI, he shows host Claire Vo how he pairs Claude Code with Wispr Flow voice dictation to build custom Chrome extensions that make his work and personal life more accessible, and demonstrates why, in his view, AI tools are rapidly closing the gap between sighted and visually impaired engineers.
The centerpiece demo is an image-description Chrome extension McCormack built himself: pressing Ctrl+Shift+D on any message pops up an AI-generated description of its embedded images, with support for follow-up questions, so he can get visual context without asking colleagues to describe images manually. He also shows how he uses Claude Code's Ctrl+G prompt editor, which opens the prompt in a standard text file rather than in the terminal, because standard text editors are far more compatible with screen readers than Claude Code's default interface. The workaround is useful for any developer who prefers composing longer, more structured prompts.
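The episode doesn't show the extension's source, but the shortcut-to-popup behavior it describes maps onto Chrome's standard `commands` manifest API, which binds a global keyboard shortcut to a named command that a background service worker can listen for via `chrome.commands.onCommand`. A minimal manifest sketch, assuming Manifest V3 (the command name and file layout here are illustrative, not McCormack's actual code):

```json
{
  "manifest_version": 3,
  "name": "Image Describer (sketch)",
  "version": "0.1",
  "background": { "service_worker": "background.js" },
  "commands": {
    "describe-image": {
      "suggested_key": { "default": "Ctrl+Shift+D" },
      "description": "Describe images in the focused message"
    }
  }
}
```

In this sketch, `background.js` would react to the `describe-image` command by messaging a content script in the active tab, which collects image URLs from the page and requests descriptions from a multimodal model.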
McCormack then walks through his skill-building workflow: after building two Chrome extensions, he had Claude analyze both and distill their shared architectural patterns into a reusable Claude skill, so subsequent extensions can be scaffolded in a fraction of the time. He leads the AI enablement program for software engineers at Babylist, and the episode blends practical Claude Code technique with a compelling firsthand account of how multimodal AI has changed daily life for people with visual impairments.
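Claude Code skills are Markdown files with YAML frontmatter, discovered from a project's `.claude/skills/` directory; the body tells Claude when and how to apply a pattern. A hypothetical sketch of what an extracted extension-scaffolding skill might look like (the name, path, and contents are illustrative assumptions, not taken from the episode):

```markdown
---
name: chrome-extension
description: Scaffold a Manifest V3 Chrome extension using our shared patterns
---

When asked to build a Chrome extension:

1. Generate a Manifest V3 `manifest.json` with a background service worker.
2. Wire keyboard shortcuts through the `commands` API, not page-level listeners.
3. Keep model API calls in the background script; content scripts only read the DOM.
```

The point of the workflow is that these patterns were extracted by Claude from two working codebases rather than written from scratch, so the skill encodes decisions already proven in practice.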
📺 Source: How I AI · Published February 16, 2026
🏷️ Format: Workflow Case Study
