Awesome episode, mate! Superb.
Glad you enjoyed it! ~Brendan
Thanks for always asking top-notch questions, Brendan! Such great episodes!
You're welcome. Great to hear that you're enjoying the show! ~Brendan
[4/9 1:56 PM] Shelton, Dana
there's gotta be a way to use copilot to give us better summaries of the page in a screen reader experience
Same wavelength on this idea. The screen reader is painful to watch, and AI is much better at summarizing than our manual heading tags. Listening to a current-state screen reader is like calling someone on a rotary phone and messing up the last digit, while a commercial ignores the volume you've set on your TV and screams its whole pitch at a jarringly loud bark... I don't get who signed off on it; it's awful. Why do we have to step through every single h1 before we get to the actions? It's not the same experience, so why are we insisting on forcing the user to navigate the same way? We can do better. Better yet, we can make the machines do better for us.