Advances in artificial intelligence seem to make the headlines daily. Whether it's for the good of humanity, or another step toward our destruction, or (let's assume) somewhere in between, AI is here, and it can be a bit overwhelming to grasp.
Publicly available generative AI tools are allowing people to make and distribute reports, images, audio, and video — all kinds of things that are, ultimately, not real. How can we process it all and separate what's fake from what's real? After all, we've already been struggling with that without these new tools. Will we reach a point when we simply can't tell the difference?
This is a question on the mind of Harris Eyre, a fellow in brain health at Rice University's Baker Institute for Public Policy. He's calling for a "neuroshield" — a combination of educational tools, regulatory protections, and a code of conduct meant to bolster our brains' operating systems against misinformation. He explains in the audio above.