Houston Matters

Can we rewire our brains to separate A.I.-supported fiction from fact?

Houston-area neuroscience expert Harris Eyre thinks we can, with the use of what he calls a “neuroshield.”


FILE – The logo for OpenAI, the maker of ChatGPT, appears on a mobile phone, in New York, Tuesday, Jan. 31, 2023. (AP Photo/Richard Drew, File)



Advances in artificial intelligence seem to make headlines daily. Whether it's for the good of humanity, another step toward our destruction, or (let's assume) somewhere in between, A.I. is here, and taking it all in can be a bit overwhelming.

Publicly available generative A.I. tools allow people to create and distribute reports, images, audio, and video that are, ultimately, not real. How can we process it all and separate what's fake from what's real? After all, we were already struggling with that before these new tools arrived. Will we reach a point when we simply can't tell the difference?

This is a question on the mind of Harris Eyre, a fellow in brain health at Rice University's Baker Institute for Public Policy. He's calling for a "neuroshield": a combination of educational tools, regulatory protections, and a code of conduct meant to bolster our brains' operating systems against misinformation. He explains in the audio above.

Troy Schulze

Producer, Houston Matters

Troy Schulze is a producer for Houston Matters. He also produces the podcast Party Politics and the digital video series Skyline Sessions. Schulze has been working as a writer and producer in digital media for over 20 years. He has received three Emmy nominations for his work on the TV...