Transcript: Business Book of the Year — Author Parmy Olson on the rise and risks of AI

This is an audio transcript of the Behind the Money podcast episode: ‘Business Book of the Year — Author Parmy Olson on the rise and risks of AI’
Michela Tindera
Hey there, Behind the Money listeners. Happy New Year. A few weeks ago, the Financial Times and Schroders gave out their prize for the 2024 Business Book of the Year. It was a tough competition. The group of books that made the shortlist covered topics ranging from the technology revolution inside the US military to the management practices that have shaped the world's corporations. But the book that took home the top prize is about a topic that dominated the news in 2024 and will no doubt continue to in 2025: artificial intelligence. The book is called Supremacy: AI, ChatGPT, and the Race That Will Change the World. Its author is Parmy Olson. She's a columnist at Bloomberg Opinion and previously a tech journalist at The Wall Street Journal and Forbes.
My colleague Andrew Hill is the FT’s senior business writer. He’s been in charge of running the book award since 2005. Back at the beginning of December, Andrew sat down with Parmy in our London studio to discuss the themes of her book. What you’re about to hear is an abridged version of that conversation. I hope you enjoy it.
Andrew Hill
I’m delighted to be joined by Parmy Olson, author of Supremacy: AI, ChatGPT, and the Race That Will Change the World. We’re talking only hours after Supremacy was crowned the FT and Schroders Business Book of the Year for 2024, the 20th title to win the FT award since we launched it in 2005. Parmy, welcome.
Parmy Olson
Thank you. It’s great to be here.
Andrew Hill
And congratulations.
Parmy Olson
Thank you.
Andrew Hill
Just outline, for those who haven’t yet read it, the argument of the book.
Parmy Olson
Yeah. So really, this is a book about the concentration of power in technology and AI. I wanted to write it pretty soon after ChatGPT came out about two years ago. I was enthralled with this new tech. Here’s something that was above and beyond anything we’d seen from Siri and from Alexa. But I felt like there was something going on behind the scenes that I thought was important for people to know about, which was this kind of battle for control. And I thought there are two really important people in this story. One of them is Demis Hassabis, who’s the founder of DeepMind, which is now owned by Google. And the other is Sam Altman, who’s the founder of OpenAI. And both these men have been trying for years to eventually build artificial general intelligence, which is this theoretical threshold at which AI surpasses our own cognitive abilities as humans. And their sense was that once they reached that, then we would solve so many of our current social ills and problems.
As they went along their journey, however, those quite utopian, almost humanitarian ideals kind of faded into the background as they aligned themselves more and more with two very large technology companies. And the point I wanted to get across in the book is that, first of all, there is a problem with this absence of proper governance and regulation of large tech companies. They’re so big, they’re almost untouchable now. And the other was to make the point that even these founders, these two very important people, they knew, they saw that there was something concerning about large companies controlling this technology in a regulatory vacuum. And they both tried to put in governance structures to separate the technology a little bit and give it proper oversight. And both of them failed to do it.
So I wrote the book almost as a little bit of a warning call about AI and why we need to have proper regulation of the technology, particularly as it becomes increasingly controlled and steered by just a handful of not just companies, but a handful of people.
Andrew Hill
Right. And this was a hypothesis, if you like, that had already emerged from your work as a columnist at Bloomberg, I imagine. Was there anything that you learned as you were writing the book that was unexpected, either about the two central characters or about the wider risks that you lay out in the book?
Parmy Olson
I think as I was really exploring the risks, it’s almost like as soon as I came up with one risk, there was another risk. And because artificial intelligence is being woven into so many potential parts of our lives, not just business, but education, healthcare, culture, there are so many potential ways that we could pay a price. One is the erosion of our critical thinking skills — you know, you’ve got a whole generation of kids going to school who are using and relying on these tools to help them do their homework, and maybe the teachers turn a blind eye to it. And then they go into the world of work, where in their entry-level roles they’re also working with these large language models, which do the thinking for them. What does that mean for that next generation of professional workers, and how do we train them up? How do they become the next senior-level managers? So things like that. I just kind of found all these different avenues, which is something that I explore in the last part of the book.
There were some things about the main characters, of course, that surprised me: learning about the very different structures of OpenAI and DeepMind, and learning about Demis Hassabis and this kind of spiritual background that he had, this potential interest in maybe finding God one day if he eventually built AGI, or in understanding the nature of reality and the mysteries of the universe, as he talks about it. So those were some of the things that surprised me.
Andrew Hill
You know, one of the things that fascinated me was that they are both pretty socially adept. I know it’s a cliché to think of digitally aware, super-brainy introverts as the people doing the coding and so on, and it is a caricature, but they are both good managers and great salespeople. I mean, they are the front people for important organisations and seem to combine those social skills with their obvious braininess.
Parmy Olson
Yeah, absolutely. And I think that is something that makes them more similar than different. I think you can be an introvert and still have a lot of charisma and be very good at, you know, rallying people to a cause. And I would suspect that both of them have some elements of introversion. Demis is like a former chess champion, obsessed with games, has been obsessed with games his whole life. He loves music, very emotive about music. And I think maybe one way that they’re different is Demis has this more kind of scientifically minded side. He did a PhD. Sam is a bit more engineering-led.
So if you look at their two companies, OpenAI is just like a ton of people who used to work at Y Combinator, or former start-up founders, or former engineers. It’s a much more kind of flat structure, whereas DeepMind is very hierarchical. If you were a scientist or a PhD, then you were like a rock star in the company and you could get face time with Demis; otherwise it’s quite hard to get face time with him. So I think although they’re very similar, very charismatic, there were some differences as well in how they ran their companies.
Andrew Hill
Right. So when we launched this year’s book award back in the spring, I asked winners of the past 19 awards what they would add to their book if they had the chance to write an extra chapter. And your book only came out in September. But this is such a fast-moving area that I think it’s probably still relevant to ask you, what would you add if you were now doing a second edition?
Parmy Olson
I feel like the main thrust of the book is all still pretty much there. Like, nothing really kind of undermines the premise. So if I was going to add anything, it would just be detail about the story — one being, for example, that OpenAI is increasingly becoming a for-profit organisation. I think we could sort of see that coming in the summer.
Andrew Hill
It feels as though it’s gone further since.
Parmy Olson
But now there’s been a lot more reporting on it. The company has not publicly disclosed anything about the direction they want to go in, but there’s been a ton of reporting saying it’s happening. Also the fact that some of the most promising AI start-ups of the last year had already been acqui-hired by Amazon, Google and Microsoft. And so I feel like that already set the scene for what we’re continuing to see, which is a real struggle for smaller, innovative companies to try and compete with the larger ones. The obvious thing that I would add, of course, is the Trump . . . the new Trump administration, and the fact that Elon Musk is in the Trump administration, which I think kind of throws a spanner in the works a little bit.
People have been saying we’ll probably have a light-touch regulatory regime under Trump. But you have to remember that Musk is an AI doomer. He started OpenAI in part because he was so worried about Google having control of AGI. And he started Neuralink because he wants humans to be augmented with these brain chips, so that if AI ever goes rogue, we can get ahead of that. So I don’t think it’s just talk. From what he’s said, it seems like he genuinely believes that. So it makes me think that we won’t necessarily see a dismantling of regulations around AI under Trump.
Andrew Hill
Right. He might cramp the style of the light-touch crowd a little bit.
Parmy Olson
Yes. I mean, at the very least, even if he repeals Biden’s executive order on AI, he’ll just replace it with something very similar.
Andrew Hill
Right. There was a great discussion among the judges about this year’s shortlist, and, just to throw a little bit of light on the jury room, one constructive criticism of Supremacy was that Musk wasn’t quite prominent enough. Obviously that’s with the benefit of hindsight about what’s happened since you completed the book. But does Musk’s own AI venture xAI, with his billions and possibly some Trump facilitation, stand a good chance of posing a threat to Google and Microsoft?
Parmy Olson
I am honestly surprised at how quickly Grok has grown in terms of its ability to raise money.
Andrew Hill
Just quickly explain what Grok is.
Parmy Olson
So, yeah, Grok is a large language model that was developed by Elon Musk’s AI company, xAI. It’s pretty much what ChatGPT is. So Musk has his own version of ChatGPT, and it’s integrated into Twitter. When Elon Musk took over Twitter and we immediately started seeing advertisers leave, I think a lot of us in the press maybe underestimated just how popular he is among certain people in the business community and in Silicon Valley, and how he’s still able to just keep raising money. And it seems like the models they’re creating, the Grok models, are actually doing quite well, hitting similar benchmarks to models from the likes of Anthropic and OpenAI, the other leading AI players.
But yeah, he’s really come along quite well. I’m just struck by the paradox of these tech billionaires and founders who are so worried about the potentially imminent catastrophic risk to human existence from AI, yet feel that to address it they must build the most powerful AI they possibly can, because there is a sense that they alone can do it safely and properly. Because if someone else does it, they’ll build the wrong kind of AI. So I think there’s a real element of hubris in all this as well.
Andrew Hill
It’s like preferring to drive the car rather than be a passenger, even if your driving might itself be dubious for whoever’s the passenger.
Parmy Olson
Yeah. That’s a great analogy.
Andrew Hill
You mentioned in the book that Sam Altman has a sort of bunker, has an actual bunker, I think, doesn’t he? He has somewhere he’s going to go with purified water ready. And the . . .
Parmy Olson
Gas masks, yes.
Andrew Hill
In case things go wrong, which is not wholly reassuring to me.
Parmy Olson
But it’s also not unusual. There are plenty of tech . . . I mean, Peter Thiel also has bunkers in New Zealand, and Mark Zuckerberg has a bunker in Hawaii. They’re all kind of buying property in Hawaii and on these kinds of islands in quite temperate climates for, I suppose, if ever they need to jet off somewhere to get away from who knows what.
Andrew Hill
Yes. So that leads to, I suppose, the question that you tackle in the book. You know, where are we when it comes to imposing any ethical or regulatory restraints on AI? Is the cat so far out of the bag that it can never be put back in again? Are we just at the point of having to say it’s gone and regulation and ethics will never catch up?
Parmy Olson
So I would say the cat is out of the bag when it comes to how humans are using these models, for better or worse. We talked earlier about the dependency of a new generation of people on these models and what that might do to our critical thinking skills. That’s a very kind of soft and squishy and abstract consequence to think about, hard to measure. But I do feel concerned about it, and I do see that as something that’s probably going to happen over the next five to 10 years.
In terms of how tech companies design their algorithms to make them safe and ethical, I don’t think that’s necessarily out of the bag and uncontrollable. Because once you have regulations that come in and state that these companies need to be audited, that they need to be more transparent about the training data they’ve used for these models, that they need to put in certain so-called guardrails to make sure that there’s less bias and that there are fewer security threats, you know, those are things that they can do. They have the money. You have companies like Microsoft and OpenAI spending billions on data centres, so they can surely put the investment into designing the algorithms to be more fair. And I say fair because almost all the models are very, very good at not saying toxic things. They say all the right things. And I think that’s because these companies really do care about reputation. However, where they fall down is on issues of bias and fairness, on issues like gender and racial stereotyping, and also on the security of these models. They’re not actually that secure. So those are two areas where companies really need to improve their standards.
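For readers wondering what such a "guardrail" looks like in practice, here is a minimal sketch in Python. It only illustrates the idea Olson describes: production systems use trained safety classifiers rather than keyword lists, and every name below is a hypothetical placeholder, not any vendor's actual API.

```python
# A minimal sketch of an output "guardrail": screen a model's draft reply
# before it reaches the user. Real systems use trained safety classifiers;
# the keyword check and all names here are illustrative placeholders.

BLOCKED_TOPICS = {"how to build a weapon", "stolen credit card numbers"}

def violates_policy(text: str) -> bool:
    """Stand-in for a safety classifier: flag text touching blocked topics."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_reply(draft_reply: str) -> str:
    """Return the model's draft only if it passes the safety gate."""
    if violates_policy(draft_reply):
        return "Sorry, I can't help with that."
    return draft_reply

if __name__ == "__main__":
    print(guarded_reply("Here is a recipe for lentil soup."))        # passes
    print(guarded_reply("Step one of how to build a weapon is..."))  # blocked
```

The same gate pattern is also where the auditing Olson mentions could plausibly hook in: a mandated audit might, for instance, log every passed and blocked reply for independent review.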
[MUSIC PLAYING]
Andrew Hill
So we were sort of erring on optimism there, about how the models might be tightened up and how there might just be a little bit of restraint. Your book is pretty gloomy about that; I came out a little bit gloomier than when I went into it. But what are the reasons for optimism about AI across the whole area of its uses?
Parmy Olson
Yeah, I think there are reasons to be optimistic. I think one reason why maybe there was a bit of a gloomy note is that, at the time the book was written, there was just so much excitement and hype and belief in how this would be positively transformational for people, and not enough attention being paid to issues like bias and guardrails. But absolutely there are reasons to be optimistic. We’re obviously seeing the laws coming from the European Union. And it’s not just the EU’s AI Act. I believe that the Digital Services Act, which addresses social media harms, and the Digital Markets Act, which is an antitrust law aimed at the biggest players and their anti-competitive behaviour, could in some ways also address some of the issues we might find with AI.
But the AI Act itself also, I think, is quite well designed. It’s not trying to regulate the technology itself. It’s not saying you should build a model this way. It’s about doing risk assessments on the model to make sure it doesn’t have this outcome of potentially harming people or being misused. And it pushes for more transparency of the builders of these models. So I think that is a reason to be optimistic. Unfortunately, the law doesn’t really kick in for another year and a half, two years. So anything could happen between . . .
Andrew Hill
It is a long time in the AI world.
Parmy Olson
It’s been two years since ChatGPT launched, and look how much people are already using it. We’ve got 300mn weekly active users of ChatGPT. So it’s a big part of people’s lives already.
Andrew Hill
Yes. I did a webinar about AI with Sherry Coutu and others as part of the run-up to the book award announcement. Sherry Coutu is a tech entrepreneur and angel investor who was also one of our judges this year. And she was pretty adamant that there is a risk on the other side: that one becomes too wary of the use of AI. She cited education in particular, where she thinks there’s huge potential to use AI safely and productively.
I just wonder where that balance is here, because one can come out of reading some things and think we must clamp down on this and stop it going any further. And at the same time, I certainly personally find I’m using it a lot for things. I can see it supercharging, you know, my future career and ways in which I can use it in my work. Where do you strike the balance?
Parmy Olson
Well, I think it sort of depends on how people continue to use these services and what we observe over the next three to five years. How is this affecting the job market? If companies like hedge funds and legal firms are primarily using large language model technology to do entry-level work, which is what they’re doing, what does that mean for the graduates who would take those jobs? What kind of jobs do they have to do? So it’s kind of hard to say whether that’s good or bad. Obviously, as you said, it could really accelerate things in your career. The question is, what price could we pay? And it would have been so hard to predict that with social media, because when social media first came on the scene, it was this incredibly useful utility, a social utility. It was almost like our new infrastructure. People talked about its importance in the Arab Spring among pro-democracy demonstrators. We only really saw the upside.
But it was only after a few years that we could start to see these unintended consequences. And I think a big part of that is the business model behind these services. So right now, large language models are subscription models, but some of them are moving potentially into ad models. OpenAI is exploring an ad model. Perplexity shows ads. And that’s something that concerns me a little bit because one of the reasons that Facebook, for example, became harmful in a lot of ways was because it had an ad model and the company was incentivised to keep people on the site for as long as possible, keep us engaged. Essentially — they would never use this word — keep us addicted. The more we keep checking it, the greater the chance we’ll look at an ad.
And I wonder what happens with a company like Character.ai, for example. This is a very successful AI business that does these AI avatars; it’s often used by teenagers to chat to an AI version of a celebrity or manga character. I’ve interviewed kids who use this app, and they say they’re on it for three to five hours a day sometimes. They’re kind of addicted to it. Now imagine if Character.ai starts showing ads; then it’s incentivised to continue that. And already, I spoke to the founder of the company, and the goal is that the AI knows the user, that the so-called context window becomes infinite. The context window is how much the AI remembers; right now it’s about 30 minutes. But he’s saying, what if it could remember every conversation it ever had with you? It would really know you better than anyone else.
I mean, just imagine what happens if people grow really attached to these AIs — and they’re not really relationships, because it’s not an actual other person. If there’s an ad model behind that, then it could end up being a little bit toxic for the user, and it becomes harder for the vendor to think about the wellbeing of their user, because their business model depends on that person being on their app for as long as possible.
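For readers unfamiliar with the term, here is a minimal sketch in Python of the "context window" Olson describes: the bot only "remembers" the most recent turns that fit inside a fixed budget, and everything older falls away. The word-count tokeniser and the budget below are toy placeholders, not how any production model actually works.

```python
# A toy model of a chatbot's "context window": only the newest messages
# that fit a fixed token budget are kept, so older conversation is
# "forgotten". Real models count subword tokens, not words, and use
# budgets in the thousands; all names here are illustrative.

from collections import deque

MAX_CONTEXT_TOKENS = 200  # toy budget

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer.
    return len(text.split())

def build_context(history: list[str]) -> list[str]:
    """Keep the newest messages that fit the budget; drop the oldest."""
    kept: deque[str] = deque()
    used = 0
    for message in reversed(history):  # walk from newest to oldest
        cost = count_tokens(message)
        if used + cost > MAX_CONTEXT_TOKENS:
            break  # everything older than this point is forgotten
        kept.appendleft(message)
        used += cost
    return list(kept)

if __name__ == "__main__":
    chat = [f"message {i}: " + "words " * 30 for i in range(20)]
    print(len(build_context(chat)))  # only the last few messages survive
```

An "infinite" context window, in these terms, would mean build_context never drops anything, which is why the ambition Olson quotes amounts to the service retaining, and learning from, a user's entire conversational history.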
Andrew Hill
And you already cite some examples in the book of pretty creepy-sounding relationships between humans and the machine, where people have become attached to, even fallen in love with, the machine. And that’s in the early phase, before, as you say, you’ve attached any ads to it.
Parmy Olson
Yes. Yes, that’s right. And I remember speaking to — I won’t name the person — the founder of a start-up that builds these chatbots that can have this kind of emotional connection with human users. Something like 50 per cent or more of the users see the AI as a romantic partner, and millions of people have used the service. And I asked that person: what if apps like yours were optimised for engagement in the same way social media was? And their response was: that can never happen, that would be a really bad thing. But who knows? We’re sort of seeing that already, with other chatbot makers looking at ad models.
Andrew Hill
So this is a strange segue, but how do you use AI yourself in your work or indeed in your personal life?
Parmy Olson
When it first came out, I actually tinkered around with it: how good is this thing as a writer, really? And for me personally, by my own standard, it’s really not good enough at all.
Andrew Hill
I totally agree.
Parmy Olson
I’m underselling it a little bit; it is useful as a really good thesaurus, or for phrases. I use it a lot for just personal things, especially recipes, going places, ideas for things to do with my kids. In a lot of ways it’s replaced Google for me. I think something like 50 per cent of all searches on Google are for something informational, and then maybe 20 to 30 per cent are transactional queries, like Nike shoes under £50 or whatever. And those are the searches that really matter for Google. I don’t think those are the kinds of searches people are doing on large language models. So in a way, I don’t think they’re a huge threat to Google just yet, at least not to its revenue, just because of the types of searches. But I think they will eventually eat away at the model.
Andrew Hill
Even though using it for search rather underuses AI. I mean, in a strange way, it’s not really meant for search in that respect.
Parmy Olson
Yeah. It’s kind of a classical use of machine learning. But Google, of course, now has its AI Overviews, where I certainly see potential for cannibalising its own business model . . .
Andrew Hill
Yes. I agree. You don’t go beyond the first paragraph of Gemini.
Parmy Olson
Why would you? And the funny thing is the responses have footnotes and links, but do people click on them?
Andrew Hill
I mean, I suppose the one thing you can guarantee is that Google is probably measuring this stuff. So if it suddenly disappears, we will realise that it was probably cannibalising something that was making more money for them. As your book points out, that is the bottom line, actually.
Parmy Olson
Yeah, absolutely. I think it’s interesting. When ChatGPT came out and was working with Bing, Microsoft’s search engine — this was maybe a year and a half ago — a lot of people said, that’s it, Google’s dead. Who would have thought little old Bing was going to just take over everything? But actually, a year and a half later, it’s barely moved the needle in terms of market share for search.
Andrew Hill
I used it for about three weeks.
Parmy Olson
Right.
Andrew Hill
Exactly that: this is going to be much better now. And no, it’s not.
Parmy Olson
Yeah. It might actually have helped that Google added AI Overviews, but I don’t think that helped too much. I think the main thing is just how entrenched these services are in our lives. It’s literally a verb to Google something. So that just speaks to the size of these companies and their dominance. And this is why antitrust regulators are trying to address that now: these services are so deeply baked into our daily lives and infrastructure.
Andrew Hill
Right. So I do use AI to brainstorm ideas. Perhaps the fifth thing that it comes up with is something I haven’t even thought of, as a way of pursuing a story or addressing an issue. But I should reassure listeners that the FT strictly forbids its journalists from doing any writing with AI. I did not run this past our various committees, but obviously I had to ask an AI what to ask you, because that’s the point. And I asked it specifically for an unconventional question that it would expect you had not had to answer before. So this is the question, which is almost an entire podcast in itself, and I’m not even sure that I would know how to answer it myself. ChatGPT asks: if AI were to write a definitive history of humanity 100 years from now, what perspective or biases might it have, and how would this history differ from one written by humans today? It’s quite a good question, actually.
Parmy Olson
It is a good one. I mean . . .
Andrew Hill
It’s an almost impossible question to answer because of the level of speculation.
Parmy Olson
I mean, a hundred years from now is a really long time. I think if it was 10 years from now, it would be pretty similar to what a human would write, because, as I discuss in the book — and you’ve probably talked about this ad nauseam before — so many of the biases that are just out there on the internet are baked into these models. And so, you know, in that history there would be a great focus on men, and on white men. And I do wonder what that would look like a hundred years from now, if humans really make an effort. I mean, maybe things will have changed so much by then.
You know, maybe we’ll just be living in a slightly more equitable society, and maybe whatever AI writes in 100 years will be a reflection of that. I do look at the future with optimism. So I would hope that it would actually be quite an inspiring read. I’m not one of these people who looks to the future in a dystopian way. And I’m sorry you came away from the book with a little bit of gloom. But I do think that we work through these things eventually.
Andrew Hill
It seems like a good moment on which to end. Parmy Olson, author of Supremacy, thanks very much for joining us.
Parmy Olson
Thank you.
[MUSIC PLAYING]
Michela Tindera
Thanks for listening to this week’s episode. If you’d like to learn more about Parmy’s book or any of the other shortlisted authors in the 2024 awards, check out the links in our show notes. Behind the Money is hosted by me, Michela Tindera. Saffeya Ahmed is our producer. Sound design and mixing by Joseph Salcedo. Original music is by Hannis Brown. Topher Forhecz is our executive producer. Cheryl Brumley is the global head of audio. Thanks for listening. See you next week.