I want to start today with a story about an old Atari game. (Non-nerd-out types: bear with me. It goes somewhere useful.) (Computer science nerds: patience, please, with a greatly simplified retelling.)

AI Curiosity

Have you been exploring, playing, working, or wrestling with AI lately? Some of my clients and colleagues use it daily, some are just now exploring, and some won’t touch it at all. In my own trying and learning, I’ve had moments of dread, disappointment, confusion, delight, overwhelm, and occasional awe.

I wanted a better grasp of where we are and where we’re going by understanding how we got here. So I turned to my favorite mode of learning and found a podcast to answer my questions: The Last Invention, hosted by Gregory Warner and produced with Megan Phelps-Roper. This artful narrative shares stories of child geniuses, World War II code-breaking machines, “Doomers” who want it all to stop, and “Accelerationists” who want it all to go faster. But the key moment that changed everything was a shift in philosophy. Is success having all the right answers? Or is success showing up empty and learning along the way?

The Machine That Beat a Human

On the path to artificial intelligence, there were essentially two camps of thought. One believed the way to achieve machine thinking was a system with all the information: the more it knew, the smarter it would be. For a while, that approach seemed to be the best way forward. Computers kept achieving feats of wonder, ultimately accomplishing what some thought impossible. In 1997, IBM’s Deep Blue beat the world chess champion, Garry Kasparov. For a moment, it looked like we had reached something like intelligence. Thinking. Problem solving. From a computer.

But though chess is incredibly complicated, it has a finite number of rules, moves, pieces, and options. It is a closed system. This kind of “thinking” machine couldn’t make the leap to new systems.
In order to “win” or make the right moves, it had to already know the system: all of the rules, moves, pieces, and options. That’s fine for games, but we live in a complex world of open systems. Ecology, cities, bodies, the internet, your office.

The Machine That Learned to Learn

Parallel to these events, another team took a different approach. What if we gave the system no right answers, but gave it a huge amount of computing power and let it learn from its actions? The idea of computers learning through systems modeled on the brain’s neural networks goes back to Alan Turing in the 1950s. For a long time, this approach looked like nothing. Surely an idea so old wasn’t useful in the 21st century.

But a little startup founded in 2010 kept going with the idea. They started small, with relatively simple games from the 1970s and 1980s. Yes, the epitome of a closed system, but they didn’t teach the computer the rules, moves, pieces, or options. Instead of giving their machine any information at all, they put it in front of an old Atari game, Breakout, with a single directive: maximize points. No rules. No answers. No playbook. Just a world to figure out.

At first, it looked like chaos. Random movements, no logic, no apparent goal. Mistakes. Blunders. Game over. And then something shifted. It figured out how movement created outcomes. It figured out what stood between it and what it wanted. Eventually it played so well that it could predict exactly where the ball would go and respond before it arrived. It won the game with astronomical scores. Closed system conquered.

Next came more complex games like Go, and then open systems, such as data-center power use and protein folding. It delivered valuable insights wherever it went. It didn’t succeed because it had the answers. It succeeded because it kept returning to two questions:
What’s actually here? What’s wanted instead? By answering these two questions, the gap between them became clear, and the path forward, the “right answer,” became obvious.
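The Breakout system’s trial-and-error loop is what researchers call reinforcement learning. As a toy illustration of the same idea (a hypothetical sketch, not DeepMind’s actual system, which learned from raw screen pixels with a deep Q-network), here is a tiny tabular Q-learning agent that discovers a hidden goal purely from rewards:

```python
import random

# Toy stand-in for "maximize points with no rules given":
# a corridor of 5 cells, with a reward for reaching the right end.
# The agent is told nothing about the layout. It only sees a state
# number, picks left or right, and observes the resulting reward.
N_STATES = 5
ACTIONS = [0, 1]                     # 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Environment dynamics, hidden from the agent."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

q = [[0.0, 0.0] for _ in range(N_STATES)]  # value estimates, start empty

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit what has been learned.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = 0 if q[state][0] > q[state][1] else 1
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward the observed outcome.
        q[state][action] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][action])
        state = nxt

# After training, the greedy policy heads toward the goal in every state.
policy = ["right" if q[s][1] >= q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

Early episodes are the “chaos” phase described above: random moves and no apparent logic. The updates gradually close the gap between what the agent estimates and what it observes, until the learned policy reliably reaches the reward.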
Why This Matters for the Rooms You’re In

This is exactly the loop at the heart of Narrative Thinking, the framework I’ve used to evolve leaders’ speakership: see the “here right now,” and see what’s wanted instead. (Just like all of the stories, novels, movies, and shows we consume. Characters start in one context. Characters want something different. Something happens or changes or grows or ends.)

For leaders in a room, this looks like not coming in with a conclusion already decided. Arriving with genuine curiosity about what’s in the room. Being open to where the story of the moment will lead.

Most of us were trained for a different approach. The goal was to have the right answer ready. “A” students knew all the right answers. Preparation meant knowing more than anyone else in the room. Expertise was the whole job.

And then, for all of us, the work grew. The systems got more complicated. We started working with people who brought competing priorities, incomplete information, and their own sense of what the right next step even was. Whether caused by our own growth or the complexity of the world, the right-answer model has run out of steam.

What those rooms actually need is someone who can stay curious in real time. Who can hold a question without clutching tight to their answer. Who is willing to keep asking, “What’s actually here?” and update based on what they find.

That’s a learnable practice, something that can be developed deliberately, the same way any other leadership capacity develops. That’s what Narrative Thinking is: asking, What’s the setting? Who are the characters, and what do they want to have happen? Are we a main character in the story, or a supporting figure? This is what we work on at The Speakership Lab.

What struck me about this piece of AI history is how relatable it was to my own learning process: observing, absorbing, inferring, asking, then seeing new possibilities.
It turns out the most sophisticated thing any intelligence can do is stay genuinely curious about what’s actually in front of it. That’s not a new idea. It’s just one we keep having to relearn.
Questions in a “To Go” cup

A question worth sitting with this week: Where are you walking into rooms with the answer already decided?

If something here is landing somewhere specific (a conversation you’ve been circling, a team that isn’t moving the way you need it to), reply and tell me about it. I read every single response. That’s what this newsletter is here to do.

Good to be in your inbox. You can expect me here on the first Wednesday of every month. Oh, and stay tuned for another exciting announcement next month!

— Margaret Watts Romney



