Making sense of the AI we're being sold
MP #13: The AI we're wrestling with is not the kind we were told to watch out for.
I was an ’80s kid; for as long as I can remember, AI has appeared in the movies I’ve watched and the stories I’ve read. Most of those AIs, whether they ended up serving the betterment of humanity or helping foster its destruction, were presented as having some kind of consciousness or intentionality. But life rarely plays out the way it does in movies or stories, and that seems to be the case with our current AI reality.
Except for a few isolated people who’ve given in to a bit of fantasy, it’s abundantly clear that the current generation of AI tools has nothing remotely resembling consciousness. These tools don’t have any inherent intention beyond what they were trained to have. They are mostly “large language models” (LLMs), which means they’re trained on large bodies of text. They are very good at making connections between the text you submit to them and the text they’ve been trained on.
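If you want to see just how mechanical that process is, here’s a minimal sketch. I’m using the open-source Hugging Face transformers library and the small public GPT-2 model purely for illustration; this is my own toy example, not what any commercial tool actually runs.

```python
# A minimal sketch of "text in, text out": the model continues a prompt with
# statistically likely words. Nothing here is aware of whether the output is sound.
# Assumes the open-source Hugging Face `transformers` library and the public
# GPT-2 model, both chosen only for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The best way to build a backyard shed that can handle heavy snow is"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# Plausible-sounding text comes back either way; nothing checks it against
# local building codes or the laws of physics.
print(result[0]["generated_text"])
```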
Why am I paying so much attention to these new tools? For one thing, many of the people I’ve respected personally and professionally for years have found them useful in a way that’s very different from any previous automated tools. There’s a world of difference between searching among mostly human-generated resources with search assistants and using a tool that summarizes what it’s been trained on and synthesizes information based on what it’s seen. People who use these tools aren’t relying on primary sources anymore; they’re letting the AI handle the primary sources and working with what these tools generate from that raw material.
I’m also the father of a twelve-year-old. He’s going to go through high school at a time when AI tools will be able to do a reasonable job of completing most of his homework assignments. Even if he doesn’t use these tools, many of his peers will. Math teachers have had to wrestle with specialized tools for a while now, but this seems like the first time that almost any homework assignment could be done by a generalized AI tool and be caught only by the most diligent teachers. These tools are almost certainly going to keep developing faster than educators can figure out how to work with them alongside their students.1
Here’s the question I’ve been wrestling with as I watch the much-hyped release of tools that are clearly useful to a lot of people, but also very wrong a lot of the time. I’ve been told not to worry, because you’re supposed to use them for assistance and fact-check or otherwise verify everything that comes out of them. The people I know who are experienced and disciplined enough to do that are already making great use of these tools. But I’m also watching them be rolled out ever more quickly to a wider and less careful audience. We’ve seen that people are not that great at fact-checking. What happens when these tools are widely available, given too much trust, and people start building things in the real world based on taking the tools’ suggestions at face value? The simple example I shared with my son is:
Imagine you ask the AI tool to help you build a gaming shed in the backyard; it gives you plans, and you build it. Then it collapses under the first heavy snowfall, because the VPN you were using made the tool think you were building in a warm climate.
Yes, it’s on us to build in a way that meets local code. But the people pushing these tools are saying “Make sure you build to local code” much more quietly than they’re shouting “This is the most amazing thing you’ve ever seen!” And that says nothing of the people who will use these tools for advice about industrial processes, medical interventions, personal relationships, political negotiations, and so much more. We shouldn’t trust those who stand to profit from rapid early adoption of these tools to put the right guardrails around them, or to pay enough attention to the negative externalities of their use.
I remember the early to mid-2000s, when the founders of all the major social media sites told us they were building tools that would connect all of humanity. I’m grateful for the connections I’ve been able to make through social media. But I’m also aware of all the harm that’s come to so many people through it, and through the concentration of wealth that an app-based economy has helped facilitate. I can’t help but think that the people telling us how great and beneficial these new tools will be sound a lot like the people who told us how much better off we’d be when everyone in the world was connected through their platforms.
AI tools are not likely to pick up a bunch of laser guns and start shooting at us. But they’re absolutely going to make a bunch of suggestions to a lot of people that would be harmful if implemented.
I don’t have answers about what we should do. I’m looking for the right ways to think about these tools and how they’re being developed, discussed, and released to the world.
I’m not concerned about “cheating.” We don’t want to raise this generation to avoid using AI tools. We should help them understand how AI works and how it fails. We should help them understand how to use it to their benefit, and to the benefit of others, and, most importantly, how to avoid accidentally causing harm by following its suggestions or advice.