Embracing Marketing Mistakes
Welcome to Embracing Marketing Mistakes, the world’s leading irreverent podcast for senior marketers who are tired of the polished corporate b*llshit.
Join Chris Norton and Will Ockenden, founders of the award-winning Prohibition PR, as they sit down with industry leaders to dissect the career-ending f*ck-ups they'd rather forget. The show moves past the pretty vanity metrics to uncover the brutal, honest truths behind marketing disasters, from £30,000 SEO black holes and completely failed companies, to social media crises that went globally viral for all the wrong reasons.
We don't just celebrate the f*ck-ups; we extract the tactical blueprints you need to avoid them yourself. If you are a business owner, or a CMO looking for a competitive advantage that only comes from real-world experience, this is your weekly masterclass in resilience and strategy.
- Listen for: Raw stories from top brands, ex-McKinsey strategists, and industry disruptors.
- Learn from: The errors that cost thousands and the recoveries that saved careers.
- Get ahead by: Turning other people's nasty disasters into your unfair market advantage.
If you have a story to tell and would like to appear on the show, tell us your biggest marketing mistake and drop us a line.
EP 106: Ant Cousins Explains Why Handing Empathy to AI Almost Went Wrong
This episode of Embracing Marketing Mistakes features Ant Cousins unpacking a moment where trusting AI with empathy nearly caused real harm.
Ant Cousins is Vice President of Product at Meltwater and has spent over 25 years working across defence, public safety and AI. He previously advised the UK Ministry of Defence in Iraq and Afghanistan, worked in counterterrorism and strategic communications, and later led AI efforts to combat misinformation.
Broadcast live, the conversation captures the unfiltered reality of how AI decisions play out under pressure. Ant draws on experience from defence, counterterrorism and modern AI product leadership to explain why empathy is one area where automation still struggles.
From deepfakes and crisis communications to a powerful real-world incident involving emergency response, this live discussion shows both the value of AI and its limits. The key lesson is about responsibility, judgement and knowing when human intervention must stay firmly in place.
This episode is essential listening for senior marketers, comms leaders and anyone using AI in sensitive or high-risk situations.
Is your strategy still right in 2026? Book a free 15-min no obligation discovery call with our host: 👉 [Book your call with Chris now] 👈
Subscribe to our newsletter
👉 Subscribe to our newsletter here. 👈
Follow Chris:
X, TikTok, LinkedIn
Follow Will:
LinkedIn
Follow The Show:
TikTok, YouTube
Welcome And Technical Chaos
Chris Norton: Hi everybody, welcome to Embracing Marketing Mistakes Live. And this week it's live. And I've got a special guest, Will Ockenden, who's back in the studio. How are you feeling, Stu?
Will Ockenden: Uh, yeah, all right, thank you. I was gonna say it's the hottest day of the year so far, and I think we're sweating a little bit in the studio, aren't we? A few technical problems before we started. Terrible technical problems. But here we are, it's all good.
Chris Norton: Yeah, so we're three minutes over, and we've also got a friend of the show, Mr. Ant Cousins. Welcome back, Ant. How are you feeling?
Ant Cousins: Not so bad. Glad to be here, and uh, yeah, happy that it's sunny for once. It's about time, right?
Who Ant Is And Why Listen
Chris Norton: Yeah, it's great to see you. Um, I mean, this show is Embracing Marketing Mistakes, and today can just do one, because everything that could have gone wrong has gone wrong. But the technical team behind the scenes are wiping their brows and we're ready to go. So today's show is going to be all about AI empathy and AI bias, which is a fascinating subject, because last time we met we talked all about AI misinformation. I've got a little bit of an introduction on you that I just want to read out so everybody can hear about who you are. Ant has built a career at the intersection of technology and public safety, beginning 25 years ago in the Ministry of Defence. He provided vital media advice to British forces in Iraq and Afghanistan to counter Taliban propaganda, which led to roles in counter-terrorism for the Home Office. During the Arab Spring, he provided the British government with political analysis and strategic communication advice, focusing on how social media was reshaping global narratives. For the last decade, he has transitioned into leadership roles within the artificial intelligence sector, specifically focusing on how AI can be used to combat misinformation, which is what he came on to talk about last time. He previously served as the CEO of Factmata, an AI startup dedicated to identifying harmful online content, which was later acquired by Cision. Today he's the Vice President of Product at Meltwater, great title, by the way, where he oversees a large team of engineers and product specialists developing agentic AI and semantic search solutions for the comms industry. His expertise is recognized at the highest levels of policy and industry: he's a member of the UK All-Party Parliamentary Group on AI and the chair of the AMEC Tech Hub, which I want to hear all about.
Ant has been named to PR Week's AI 25 list for the last three consecutive years, reflecting his influence in shaping how global brands navigate the complexities of generative AI. And he's a frequent commentator in major outlets like the BBC, Forbes, and The Times, where he advocates for the responsible and ethical development of large language models. So welcome back to the show. And um, you were meant to be in the studio, so let's embrace that marketing mistake right away, yeah?
Ant Cousins: There's the first mistake right off, right? You were like, you're in Leeds, right? And I'm like, no, I thought we were doing this on LinkedIn Live. Do we need to be in the same room? Uh, my bad. What I love is the things you don't see when these shows are being produced, right? I saw the technical people trying to solve the audio problem we were having, and they were getting progressively younger and younger the more severe the problem got, until we had someone who clearly just got out of school, and now it's solved.
AI Speeds Up And Deepfakes Mature
Chris Norton: That's how it happened. That's how it worked. What happened to the older lot? We all just stare at the technology now and hope it works. That's what we were hoping for. So obviously, you were on the show about two years ago now, and nothing's changed in AI since then, has it?
Ant Cousins: Uh, I can't believe that's been two years, because it's gone by in a flash. So much has happened in such a short space of time. Every three months there's an entirely new concept, an entirely new thing to learn. I think the last time I was on, and it was definitely around that time, I was telling everyone: if you're waiting for it to calm down, it's never going to calm down. You've got to get involved and start learning and be comfortable with being uncomfortable, you know? And it just continued to ramp up. That hockey-stick growth has continued. And the bubble hasn't burst yet, but even if it does, it won't stop the adoption of AI, in the same way that the dot-com bubble didn't stop us adopting the internet, right? So yeah, I still say the exact same thing I was saying two years ago, which is: if you haven't got on board yet, there's still time, but you know, no timeline at present.
Will Ockenden: So one of the things you spoke about on the show last time was misinformation, and the fact that brands can't be passive when it comes to misinformation, particularly AI-driven misinformation; they actually need to be super proactive about it. I assume that's still the case. And has misinformation got worse in the last couple of years as AI's developed?
Ant Cousins: Yes, um, significantly worse now. And the risks there aren't just for brands, they're for ourselves too, right? And this is primarily driven by the increasing capability in video and image generation. So deepfakes are now basically commonly available. Two years ago, you still had to pay a bit of money and have some technical expertise, to stitch together a few different elements to create something that was lifelike. And if you were using AI to generate images back then, we had, you know, the weird things with hands, and kind of odd places in the body, and fingers. The Pope thing, right? But the Pope thing was a good example of where it still went viral, even though it took a few people a little while to go, hang on, there are a few more fingers there than there should be. So a few years ago, there were still obvious markers of AI-generated deepfake content. But now, no. Now we're at a stage where it's basically indistinguishable, you know. And I think we're at the stage now where, if you look at the Irish presidential race, kind of late last year, the leading nominee was deepfaked: not just a video of her saying she was withdrawing from the race, but a completely AI-generated news segment of her saying that, right? They've gone a level beyond what we'd seen before. And that, for a lot of people, actually impacted the numbers at the polls. So I think now we're seeing the length of the AI-generated content increase, as well as the complexity and intricacy of the generated images and people, and that creates a whole new area of risk for brands and for ourselves.
Will Ockenden: It's mad actually, talking about how the technology has developed in the last couple of years. I know when Chris and I used to speak to clients and do presentations on it, some of the example videos we gave were, you know, the Gordon Ramsay in a restaurant. And at the time they were quite interesting videos, and they look so unlifelike now, don't they?
Chris Norton: Yeah, they looked crap, and you could tell they were AI-generated. I think now we've got the videos of Snoop Dogg in the studio with Michael Jackson, and people that have passed away, like Tupac Shakur in the studio, and the lifelikeness of them is just unbelievable. So the power of it, and it's just free to do. Somebody can do it within two minutes; you can create a lifelike video, can't you? You don't need to be a Hollywood studio expert anymore. So what are you doing at Meltwater then? What's your role currently? Because you've changed jobs twice as well, haven't you? What are you doing in this area of misinformation and bias and everything?
Ant Cousins: Got it. So, yeah, the way I frame it, right, is that the areas of risk for brands have increased exponentially. But also the penalties for brands for not responding in a timely fashion, in the right way, have also increased exponentially, to the point where even not taking a position on some kind of threat or some kind of viral issue is taking a position, right? And, you know, audiences are increasingly penalizing brands for not taking some kind of decision which they feel aligns with themselves. But obviously brands, especially the larger they get, have more and more audiences, and the likelihood that there is an overlap in the beliefs and values of those individual audiences is increasingly slim. So brands are now walking this unbelievably thin tightrope of: what do we do about this issue? What do we say about this issue? We're increasingly penalized by not taking a stand. We have to say something, but what do we say? And arming them with the data to drive that decision-making is part of what we're doing at Meltwater, right? Those are data-driven decisions. The Russian invasion of Ukraine is a good example of this. A few years ago, America, the West, Europe, all pretty unanimous and on the same side: Russia is the aggressor, of course we side with Ukraine, of course we close down our operations in Russia at the cost of millions, if not billions, of dollars in some cases, right? But it was an easy decision because everyone was on the same page a few years ago. But what if Russia invaded Ukraine today? Would that be as easy a decision for those companies? Those are multi-million, multi-billion dollar decisions, and audiences are fractured in a way that they weren't only a few years ago. So you've got to arm yourselves with the data to take the decisions, right? Why are we going to make the decision?
Which audiences, what do they believe, what are they saying about it, how many of them are there? And are those economically consequential audiences for us? Right? That whole nexus of calculations you need to do in order to arm yourselves on what you say about something, which is the public relations part of the role, right? Not just to hold a mirror up against the business internally, but to hold the mirror up externally. That's data that you need to make that decision, and that's where my role sits right now. Arming organizations with early warning about those issues, flagging those issues, showing the data, the velocity, the reach, the metrics, all the stuff that you can't get from just asking ChatGPT, right? The old-school hardcore of our industry.
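The nexus of calculations Ant describes, velocity, reach, and whether an audience is economically consequential, can be sketched as a toy scoring function. The formula and the field names are invented for illustration only; they are not Meltwater's actual model:

```python
def issue_score(mentions_today: int, mentions_yesterday: int,
                total_reach: int, audience_overlap: float) -> float:
    """Toy early-warning score for a brand issue (illustrative only).

    velocity: how fast mentions are growing day over day.
    total_reach: combined audience size of the mentions.
    audience_overlap: fraction (0..1) of that audience which is
    economically consequential for the brand.
    """
    velocity = mentions_today / max(mentions_yesterday, 1)
    return velocity * total_reach * audience_overlap
```

The point is only that the input to the "what do we say" decision becomes a number you can compare across issues, rather than a gut feel.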
Will Ockenden: How sophisticated are companies when it comes to this stuff? I know when we spoke two years ago, you felt perhaps businesses weren't as proactive as they should be. Is that starting to change, or is there still a lack of awareness of how proactive businesses really need to be?
Ant Cousins: Uh, big time. I think we've had a lot more inbound in the last six months or so. I think deepfaked image and video generation is the thing that triggered it. It wasn't necessarily that the risk increased that much because of that new vector, but it's the thing that's made them recognize the whole area needs work; the whole area needs embedding into the way they make decisions. So we've got a lot more inbound interest now from brands, especially because of the risks specifically of deepfaked CEOs and leadership. Brands are petrified that their CEO could get deepfaked saying, oh, we're stopping this line of business, or we're investing over here, we're closing down this over there. That's what really scares them. But it has highlighted that whole area of reputation and risk, which is all intertwined. So yeah, I think brands are there now.
Will Ockenden: It's interesting, isn't it? Even if you are the victim of a deepfake and it's exposed as being a deepfake, the damage to a degree is already done, isn't it? You know, if it's showing your CEO in an unflattering light or something like that. Obviously being proactive is desirable, but there's a lot of damage anyway, isn't there? Even if you do detect it.
Reputation Risk And Data Driven Decisions
Ant Cousins: Yeah, 100%. Pre-bunking, which we've been advocating and promoting as a legitimate tool for brands for quite a few years, actually, ever since misinformation was a thing: we've said, get ahead of some of those issues. But now we've got a new player in town, right? The LLMs themselves are a new form of channel, and a new kind of listening you need to conduct, which actually gives you another opportunity for pre-bunking. So if you know you're at risk of certain kinds of allegations or certain kinds of misinformation and false narratives in your industry, et cetera, there's nothing stopping you making that information available to the large language models now, so they can see that and know that, and find verifiable third-party claims and evidence of it. You can do that now, which will limit the impact, maybe not on social media as such, but when people go looking for information on ChatGPT or Perplexity, et cetera, it's less likely those models are going to get fooled by it, because they'll go, hey, well, that seems counter to all this evidence I've already stacked up on this topic, you know. So there's definitely a new player in town that gives you new opportunities for pre-bunking.
Will Ockenden: Interesting.
Chris Norton: And is Perplexity as big a threat now as the others? Because ChatGPT caught Google with its pants down; this was November 2022, and then Google springs to life. I'm currently reading Empire of AI, by the way, which is a fascinating book about the history of it all. Because I remember when ChatGPT came out, it was a big global viral hit, and they weren't ready. It was basically like when Twitter came out and the fail whale kept appearing, if you remember back in the day, for all those comms and marketing professionals that used to use that. It was the same sort of situation: ChatGPT's developers weren't ready, they weren't ready for the amount of data centers they'd need, and then Google was suddenly like, oh, they got caught out. But the thing that's fascinating to me is that Perplexity was a different thing. That was an AI search tool that everybody should be using. So where do you think Perplexity sits in the era of misinformation, and where is it in the model? Because you don't hear as many people talking about that one as the others.
Ant Cousins: Yeah, so I could answer that question now, and the answer may well be different by the end of this podcast. That's how quickly it's moving. But what I would say is, I expect in the longer term a more reliable answer might be this: if you remember back in the day when we just had newspapers and broadcast TV, you'd go out to buy the newspaper, which sort of reinforced your beliefs, right? You buy that newspaper, and that would be your choice, in the same way we choose to watch Channel 4 News versus ITV versus BBC. And then social media came along. For a while, things like X, or Twitter as it was back in the day, were sort of that town square moment, where everything was on Twitter. If you had Twitter, you had a pretty good view of the rest of the internet, actually. They had a really good model there. But in the same way that broadcast media has that political bias, because there's an editor sat there trying to form a narrative that makes sense over time for that channel, that same thing has happened to the social media channels. Elon took over Twitter, and now there's a specific kind of bias to some of those stories. Not all of them; there are still other groups on there having their own conversations. But the moderation policies, what gets amplified, what gets pushed down, right? Those are specific choices that the social media platforms make. And you can start to see that there's a sort of grouping of audiences on the channels. I can make it no starker than this: the person who goes on X may be different to the person who goes on Bluesky. Right. And there are age differences in that, and political-leaning differences in that.
And I think it will go the same way with the LLMs: eventually they'll find their markets, and in some cases it'll be language-based or geography-based. In some cases it'll be business versus consumer. And in some cases, yeah, political. But I think they'll find the areas that they excel at, and their ability to hone in and focus on their audience will come. We'll find over time that they'll anchor around specific audiences, and therefore around specific values, interests, topics, behaviors, that kind of stuff. But if you want the ranking right now, Anthropic is actually number one. So ChatGPT created the category, right? And then Perplexity; Google was caught flat-footed, and then Google took over for a bit. And they've all started to create their own tooling and areas of expertise and things you'd go to them for. But right now Anthropic is crushing it. Claude Code, and Claude Cowork specifically, being the thing that's kind of set the world on fire. We briefly chatted, right, about the SaaSpocalypse, created in no small part because of Anthropic's leap ahead in capability, a few months ago, as a result of a number of different technologies that all came together in Claude Cowork.
Chris Norton: And are you using Claude Cowork in your day job then? Because essentially, for those of you that don't use Claude Cowork: Claude Code is the other product that exists, but Claude Cowork essentially is an AI agent that goes on your machine. You download it, give it access to a folder, and it does your tasks for you until it completes those tasks, and then it puts them in a separate folder, shows you the work, and you can approve it, just as if you had an intern. It's no longer just a chat; it's a chat and an instruction to somebody to go off and do it. So you could essentially go off and play racquetball or padel and come back, and in essence, if you've briefed it correctly, you can come back and your work is done, correct?
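The folder-in, folder-out workflow Chris describes can be sketched in miniature. This is a purely illustrative pattern; the file layout and function names here are assumptions, not how Claude Cowork actually works:

```python
from pathlib import Path
from typing import Callable

def run_agent(inbox: Path, outbox: Path,
              do_task: Callable[[str], str]) -> list[str]:
    """Work through every task file in `inbox`, writing one draft
    result per task into `outbox` for a human to review.
    Source files are never modified: the human approval step sits
    between the agent's output and anything final."""
    outbox.mkdir(parents=True, exist_ok=True)
    completed = []
    for task_file in sorted(inbox.glob("*.txt")):
        draft = do_task(task_file.read_text())  # the "intern" doing the work
        (outbox / f"{task_file.stem}_draft.txt").write_text(draft)
        completed.append(task_file.name)
    return completed
```

Brief it well, go and play padel, and review what has landed in the output folder when you get back.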
Will Ockenden: Not a bad idea. That's my Friday sorted.
Chris Norton: Every Friday, isn't it? Padel. Uh, anyway, yeah. So have you used it, and do you use it in your day job?
Pre-Bunking And LLMs As Channels
Ant Cousins: I'm building a prototype right now. So, 100%. There is a new kind of psychosis. As you go out from the Valley, right, which is where these innovations tend to catch on first, and they kind of spread out from Silicon Valley, there's a new kind of psychosis being created, specifically by Claude Cowork for the most part, which is the productivity psychosis. Everyone now has this subtle feeling of: I could be more productive. I could be more productive than I am right now. I could be having this conversation while Claude Cowork is doing five things at the same time. And this feeling of "I need to be producing", right, is creating real challenges for some people. I had this exact same feeling just a couple of weekends ago. I have all these ideas of things that I want to build prototypes of, or experiment with. And to answer your question: yes, I'm using it in my day job, mostly for prototyping. For, like, what could this idea look like? Just spin it up, have a go. You know what? Don't like it, do something different. So, mostly using it for prototyping. And I had all these ideas over the weekend, and I was like, I really want to be prototyping right now. It is a challenge. But the difference is, because of the way you explained it, I think a lot of people listening, unless they've actually played with it, might think what you explained sounds kind of like what ChatGPT is. You know, we've been using AI for the last couple of years; how is it different? And it's the combination of things that Claude Cowork brought together at the same time. It's the increasing capability of the model itself, its ability to plan better, its ability to self-challenge and refine its plan, the transparency it gives you on the plan. And it's the new concept of skills, right? That's a big part of it.
We've all been playing around with prompts for the last few years. But prompts, and I've seen this in Meltwater, in our own kind of conversational interface: you can spend the time to refine a really good prompt, which gets you the output you want. But it's hard graft to write a really good comprehensive prompt. And if you want to edit that prompt, you're then editing all this block of text to change little variables here and there: change the name of the brand, or change the time period. It's hard work. But skills give you the ability, yes, there's still a prompt at the core of it, but to split out specifically the elements: the tools you want it to use, the context it needs to refer to, and the direction it's got. So it makes it a lot more robust and repeatable. And MCP servers bring the tools into that mix as well. A bunch of different concepts came together in Claude Cowork that really took off. But the reason I said I could answer that question now and the answer might be different by the end of the podcast is that when a new concept really takes off, we saw OpenAI, Microsoft, and others all announce their own versions of skills and their own approach to it. So by the end of this podcast, who knows what will have been announced? And this is...
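Ant's contrast between hand-editing one block of prompt text and a skill with its variables split out can be sketched like this. It is a toy illustration of the idea only, not Anthropic's actual skill format, and the prompt wording is invented:

```python
from string import Template

# The hard-graft prompt is written and refined once; the bits that
# change per run (brand, time period, focus) are pulled out as named
# variables instead of being buried somewhere in the text.
MEDIA_BRIEF_SKILL = Template(
    "You are a media analyst. Summarise coverage of $brand over the "
    "last $period, focusing on $focus. Return three bullet points "
    "and an overall risk rating."
)

def render_skill(brand: str, period: str, focus: str) -> str:
    """Editing a run now means changing arguments,
    not rewriting a wall of prompt text."""
    return MEDIA_BRIEF_SKILL.substitute(
        brand=brand, period=period, focus=focus
    )
```

Changing the brand or the time period becomes a one-argument change, which is what makes the approach robust and repeatable.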
Chris Norton: Sorry, I was just gonna say, because Copilot is launching Claude Cowork on the first of May 2026 in the UK. So if you haven't got it and you've got Microsoft's infrastructure, you can install it on your machine from the first of May in the UK. Yeah, it's because they're crushing it, that's the whole reason. So, you've come on the show to talk about, well, AI in general. I mean, I could literally talk to you all day about AI in general, but we wanted to talk about a couple of mistakes you've given us as well. You gave us a great one last time you were on the show: you talked about when you had to pretend to be David Cameron whilst out in Afghanistan, which was a cracking story.
Will Ockenden: That made the top ten mistakes at the time.
Chris Norton: I was like, how the hell did you prepare for that? Well, you didn't prepare for it, did you? You just had to wander out in front of the world's media. But today's mistake that you've shared is to do with a British army general and a Bahrain brothel. How's that for a prompt?
Perplexity, Claude, And Model Bias
Ant Cousins: Has this ever been talked about on LinkedIn before? I was hoping you weren't gonna mention the B-word in the opening, because now everyone has this picture in their mind. So yeah, this is one of those Middle East hijinks things. When I explain my background to people, they're like, oh, you're James Bond. And I'm like, no, no, no: Johnny English, much more Johnny English than James Bond. And this is one of those Johnny English moments. So I was in Bahrain during the Arab Spring. We were meeting with a few people, and I was there with a certain general. Lovely fella, but, if you imagine a cavalry general, that picture you have in your head is exactly him. A really well-spoken, well-to-do British gentleman. And I'm staying with him in a hotel in Bahrain, and we need to go find somewhere for dinner. And he goes, uh, Tony, I was Tony back then in the Ministry of Defence because Ant was a bit too casual, and he's like, Tony, I feel like Indian fusion tonight. And I was like, yes, General. Like, what the hell? Fine, I'm finding you an Indian fusion restaurant in Bahrain. But this is during the Arab Spring. So all of the taxi drivers, who were mostly Shia Muslims, were gone, right? The Shia had gone back to their home countries or just evaporated, and the Sunnis were trying to do a lot of the jobs they didn't necessarily have to do in the past, like driving taxis, or they just imported people to drive taxis, and obviously these people had no idea where they were going. So I found what I thought was an Indian fusion restaurant. I had no idea if it was open or not. I couldn't even call them; they weren't answering.
This is back in the day when my internet was costing me about 50 quid a second, right, to go and Google things. So I found what I thought was the right restaurant. We meet in the lobby and we get into the taxi, the taxi starts driving, and I try speaking to the driver. He speaks no English, I speak very little Arabic. We try to explain to him the name of the restaurant, we start driving, and, you know, Manama in Bahrain is not a big city; you can drive the whole thing in about 45 minutes. We're driving for at least half an hour, and we haven't got to the restaurant yet. And the general's starting to get a bit bristly. He's like, Tony, Tony, and I'm talking to the driver, like, driver, where are you taking us? We should have been there 15 minutes ago. It's escalating, and you can tell the general's about to burst. I was like, just take us to the nearest restaurant, and he's like, oh, and he clearly misunderstood the direction that I gave him. So he pulls up outside what looks like a restaurant, and I say, ah, General, we're here, pretending it was the restaurant I intended to take him to all along. We get out, and the restaurant looks dubious, I'll say that much. The whole ground floor is dark. But there's a guy there at the door welcoming us in, super happy to see us. We go inside; I hadn't seen many people in, you know, the last few weeks. And there's a staircase going up to the top floor of this place, with something hung over the front of the stairway. I was like, oh, this has got bad news written all over it. But we were very hungry, we wanted a drink, the general's bristling. I was like, I'll go first, sir, in case there's some nastiness waiting at the top.
So I walk up the stairs, get to the top, and there's a beautiful bar at the top of this place. I'm like, oh, lucked out. And then I look at the clientele, and there's one rotund Arab guy sat at the bar, and easily 15 very scantily clad women, all just hanging around the bar. And immediately I'm like, this does not look like a restaurant; I'm pretty sure this is a brothel. The taxi driver clearly was just like, it's his mate's place, or, you know, who knows, right? So I'm just waiting for the general to get to the top of the stairs. He gets to the top of the stairs, looks at the scene, and in a way only a British general could say it, he goes, just the one here, I think, Tony. And we stay for the one, one gin and tonic. We're very polite, we don't engage at all, and then we leave. And as we're leaving, we're facing a different direction, and I see our hotel is on the other side of the road. We'd basically been driving in circles for about 45 minutes, and the place we'd been dropped off at, we just walked across the road from it and ate at the hotel. But that was the time I took a British general to a Bahraini brothel.
Will Ockenden: There was no international media in there, I hope. Because on a more serious note, a logistical hiccup like that could spiral into a scandal, couldn't it?
Ant Cousins: That's an incident, yeah. That's an incident. But he was very good about it, a very polite British general. One of my favourite British generals of all time.
Will Ockenden: I'm looking forward to the third time you come on the show and hearing what your mistake's gonna be, because your stories are escalating in terms of threat. So one of the themes of the show today, and I know we've covered a lot of ground, is empathy in AI. You might think about AI and assume it hasn't got the ability to express empathy in any credible or authentic way. Is that the case? And I know you've got an example of when you've used AI for empathetic communications and it's actually outperformed, or been every bit as authentic as, a human, hasn't it?
Ant Cousins: 100%. I think we give AI a bad rap when it comes to empathy and emotional awareness. Because on the one hand, yes, it's only doing an impression of empathy. It can't actually care; it can't actually feel emotion. It's doing an impression based on what it's read, from all the words on the internet. So on the one hand, it's only an imitation. But on the other hand, we humans have empathy and it doesn't mean we always use it, right? This has already been shown in research on doctors and their follow-up communications. Doctors are short on time, always pressured, dealing with horrific circumstances in some cases. They can't always give every patient the amount of time and effort they'd like to, especially when the patient just has logistical follow-up questions about treatments and things like that. They did an experiment, I think two years ago now, and found that patients having that follow-up conversation with AI felt more looked after, more cared for, than when they were dealing directly with a human doctor. And you can imagine why, because AI has infinite patience. It can be there for days having that conversation. So there's an imitation thing there, but in some cases it doesn't matter. And the incident you're referring to was my own personal realisation of this. I was on the train going into London, probably about a month ago, and when I got on I found a woman kind of shaking, pale, clearly in distress. And it was weird: only the night before, I had read about the bystander effect. Have you come across that? In public situations, you see something like this happen, and everyone thinks, oh, someone else will deal with it.
Right. And the more people there are, the more likely you are to think someone else will deal with it. But everybody thinks that, and so the person sometimes goes without help for longer than they should. I saw a random post on this on the Thursday night, just doom-scrolling on Instagram. And then on the Friday morning I had an opportunity to put it into effect, because I got on the train and a woman was clearly in distress, and I sat down. My first thought was, I'm gonna be late for my meeting. And my second thought was: wait, bystander effect. Maybe I should be the one to go and offer some help. So I sat down with her and spoke to her, and it turned out, yeah, she was in some real distress, she was having some real issues. I'm not gonna go into the details of the medical situation, but she was losing consciousness. And the paramedics were an hour and a half away. We were in Finsbury Park, but somehow it was gonna take an hour and a half for the paramedics to get there. And I thought, if she passes out before they get here, they're not gonna be able to ask all the questions they'd normally ask about her history and everything else going on. So I got ChatGPT out and started asking: what should I ask? How do I record it in medical terminology? I asked all the questions, then said, format this for the paramedics when they arrive, and it gave me a short, succinct summary. Amazing, right? Super useful, a great use of AI. Then the paramedics arrived, and she was in and out of consciousness, but she was kind of awake when they got there. And I don't want to speak ill of those particular paramedics, but the lack of empathy they displayed... It was like, all right, love, what's up? And she could barely talk. They both had their hands in their pockets; neither of them even got down to her level, just kind of talking down to her.
I was like, surely this is basics, you know, engaging the patient. It just seemed strange to me. They didn't really display any kind of interest or care the whole way through. It was just like, who's the next patient? Get you in the ambulance, run some tests, you'll be fine. And it occurred to me at that point that they had the ability to display empathy, but in some cases maybe their training takes over and they're just dealing with the problem. They're not caring about the emotional wellbeing of the patient because they're prioritising the physical welfare of the patient. And then when we handed off and she went into the ambulance, I was left with ChatGPT, because I wanted to make sure she got taken care of nicely, and I wasn't massively appreciative of the way the guys had handled it. So I said to ChatGPT, let's do a wrap-up of what we did. What could I have done better next time? And in the wrap-up, ChatGPT said to me: I think you did as well as you could, you got her on the floor, you checked her temperature, and so on. But how are you doing? Because this could be quite a traumatic incident. And I was like, oh. No one else asked me that. Now, I realised it was just imitating, doing what it thought it should do. But in that moment, I felt like it cared, right? And what does it matter if it's imitating? What matters is how the human feels. So for me, that was the realisation that even if it's imitating, sometimes that's good enough.
Will Ockenden: Yeah, because you look at community managers for a brand. Like you say, empathy will only work if they care. If there's a brand offering health advice or giving bereavement counselling or something like that, it only works if the human cares, doesn't it? So I suppose from a brand perspective, how much can you scale this? I don't want to sound insensitive, but is it possible for a brand to scale empathetic communications and trust it to AI?
Ant Cousins: Yeah, you're getting at it; it does reach a finite point, right? So I would definitely recommend using it in a number of different ways before you start trusting it to do empathetic communications at scale. For example, if you're writing communications, key messages, engaging audiences and producing content, definitely use AI to check your bias. Because you may be empathetic to a particular audience while your blind spot is the cares and values of a completely different audience. There's no such thing as an unbiased AI, in the same way there's no such thing as an unbiased human, but those biases very rarely completely overlap, so the combination of the pair is greater than the sum of the parts. Using AI to test you, to challenge you, to find your blind spots, to challenge you on your bias: absolutely you should be doing that, and it's 100% scalable. The other thing, and I mentioned to you guys there was another story I wanted to tell that leads into this, is the more sombre story from the Middle East, which is another good example of where I think AI will still struggle. This is really where AI has a limitation, where we still need humans, specifically in PR. This was, I'm gonna say, 2008. There was a soldier killed in Afghanistan. And there is a process we used in the Ministry of Defence, which I hope doesn't become relevant again the way it was back in the day. If you recall, we were losing a lot of soldiers, almost daily: the stories of soldier loss in Afghanistan and in Iraq. The process we used in the Ministry of Defence was the fatality notification chain, which starts with the loss itself and the explanation of the incident.
We collect what happened: why did this soldier die? That starts getting pulled together by the unit and the ops guys in theatre, on the ground, from what they think happened, and they start to form that story. But as soon as that soldier is killed, we do our best to track down the next of kin and inform them. It's obviously the worst news you could possibly imagine getting, to be told that your father or son or brother or daughter or wife has been killed. We try to track down the next of kin and let them know as soon as possible. But back in the day, because this was sometimes front-page news, we had an agreement with the media: give us 24 hours to track down the family and let them know before we tell the wider world. Because if you announce the death of a soldier immediately and you can't say that the family has been informed, everybody who has a son, a father, a wife or a brother out there panics, because it could be their relative. So we had this arrangement. As soon as the fatality happened, my job in Afghanistan was to pull together the story of what happened, and also to collect the eulogies from the soldiers that knew him or her, pulling that together as a package. So whilst the notification is happening, we're pulling together the package, and as the 24 hours comes up, we can share it: here's what happened, here's who they were, here are the eulogies. That's when you see the stories turn up in the media; they've got that package and they're ready to go. And they can always say: the family has been informed. That makes it okay for everyone else to hear it, because they know it's not their relative. So that's the process. I'm not sure many people are aware that it goes on.
But it's part of the chain, and we take that responsibility very, very seriously. Because a gap or a mistake in that chain has consequences not just for the family; potentially breaking down that arrangement has consequences for every family that follows. So I took my role in that really, really seriously. Later, when I was in the MOD press office, I was another part of that chain. I wasn't collecting the stories in Afghanistan anymore; I was in the MOD press office, dealing with the news media who were publishing the story. And this was the mistake, and this is where I don't think AI will ever be able to replace the human aspect, specifically when it comes to PR. The mistake I made, for a particular soldier, was that I didn't spot a royal connection. A soldier was killed in Afghanistan, and it turned out he had trained with Prince William at Sandhurst. There were pictures and stories of him and this soldier together, which immediately elevated the story. So what was, unfortunately, a relatively routine story got elevated, and all of a sudden there were really serious consequences for the family, because I didn't spot that connection. I got a call from, I'm gonna say it was Rupert Hamer. Do you remember Rupert Hamer? He was a defence correspondent, I think at the Mirror. He was actually killed in Afghanistan himself, I think in 2010. Really good guy. He called me in the press office, this was over the weekend that it broke, and said: I've got to let you know, I'm sending stringers to the house. And this is a family who've just been told they've lost a member of their family.
He's like, I've got to let you know I'm sending stringers to the house, and so is everyone else, so my recommendation is we get them out. Which made me so angry: that the media would basically go after the family to get the story, because it was breaking that arrangement we had. Give us 24 hours to deal with this. But we had to move the family, get them out of their own home at the worst possible time, and move them somewhere they could be safe and protected, whilst the stringers knocked on the door: hey, can we have a statement, please? Can you show us a face, please? The worst aspect of journalism came to fruition. Now, that for me is a good example. That was a mistake that I made: I didn't spot the royal connection, and it cost big for that family. But the challenge is, AI would never have made that connection either. AI doesn't have a history of all these people; it doesn't necessarily recognise what's gonna elevate a story. Maybe once it's happened, you can tell it: by the way, look for a royal connection in future. But it needs access to all of the context, all the backstories and histories. You can never be sure what's going to cause a story to escalate. And this happens on both the negative and the positive side, right? You can find an angle to get a bit more jazz out of a story. But AI will never have that sort of nuanced, contextual, historical awareness to recognise that you've got a greater risk in the story than you think. That's still where we need humans. So that's probably one of the biggest mistakes I've ever made, and that's where I think there is a limit to the use of AI. You still need a human to connect those dots.
Chris Norton: I don't envy that situation, having to deal with that. That sounds pretty horrendous: A, the situation, and B, the mistake on top of it. Fair play to you; what a responsible job in terms of marcomms, communicating that to the press. And I know what you mean about the stringers. That type of journalism is not everyone's favourite type of journalism, is it? There are brilliant journalists out there, and there are those as well. Everyone's doing a job, but when someone's lost a member of their family, to have them knock on your door, that's not appropriate.
Will Ockenden: I remember years ago I did work experience at a local newspaper in West Sussex, and, there's a point to this, they were telling me about death knocking. When you're a junior reporter, you get put on death-knocking duty. They'd literally go through a list of deaths, somebody that's died in a car accident or something, and you'd have to go and knock on the door and try and get a story from the parents. So it's a well-trodden path. It's the bad side of it all, isn't it?
Chris Norton: Yeah, it's not ideal, because nobody wants that. It's bad enough to deal with anyway, isn't it? You wondered where that story was going then, didn't you? No, I was more interested in the fact that I've known you for nearly 20 years and I didn't know you were an ex-journalist on a West Sussex newspaper. Where did that come from?
Will Ockenden: Another exclusive for you.
Chris Norton: I knew you were a teddy bear and the head fell off. I thought that was your ex-job.
Will Ockenden: Lots of stories, Chris. Lots of stories. So, this is a LinkedIn Live, and as you know, we've been getting a few questions in.
Chris Norton: There's a question here from Isabel, asking: is there a point where deploying AI in sensitive comms areas, like crisis response, DEI or social issues, actually does more harm than good, no matter how careful you are? And what expectations do the public have around this?
Ant Cousins: Yeah. As I said, if you think about the whole journey towards the creation and distribution of communications, the optimisation and delivery, there are so many parts to what we do in communications, so many steps that feed in. There are plenty of ways you can deploy AI to scale the human: to do the low-level analysis, the aggregation, the sentiment, the narratives, the clustering, deepfake detection. There are so many ways to use AI without having it take that final trigger pull, to use an old term. Because AI has the ability to cause more reputational damage, if you don't get it right, than if a human made the same mistake. That's the thing: a human can make a mistake and it won't cause anywhere near as much controversy as if AI made the same mistake. Because it's a double mistake: you've made the mistake, and the bigger mistake is that you trusted AI to do this thing. So I think there are plenty of ways for us to deploy AI without putting it in charge of that last part of the puzzle. As I'm designing software, I'm thinking about the parts of the process that need to be human, the parts where a human is actually stronger than the AI. And that last decision, do we send this or do we not, is for me always still a human decision. In the same way that deciding not to pull a trigger is a human decision. We call it courageous restraint in defence terms: the ability to know you can pull a trigger, but deciding not to, despite the pressure to do so. And that's something I find a lot of people get wrong when they use AI; they accidentally get themselves into trouble because they've biased the AI towards action rather than taking a step back.
So they might say to the AI: hey, how do we respond to this issue? And the AI goes, yep, here you go, here's the best line I can think of. But you've biased it towards responding. Whereas you might want to take a step further back and ask: do we even respond to this? Because this is tomorrow's chip-shop paper, as we used to say in the press office. People accidentally bias the AI towards taking action. But deciding not to take action, saying to the CEO, we've got to take this one on the chin, guys, adding a line to this is just gonna add more fuel to the fire, let it go: that kind of courageous restraint is a human responsibility.
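The difference Ant is describing lives entirely in how the question is framed to the model. A minimal Python sketch of the two framings; the prompt wording and function name are illustrative, not from the episode:

```python
# Hypothetical sketch: avoid biasing an LLM toward action by putting the
# "should we respond at all?" decision before any request for a draft.

BIASED = "How do we respond to this issue? Draft our best line."

UNBIASED = (
    "First decide: should we respond to this issue at all, or let it go? "
    "Weigh the risk that any statement adds fuel to the fire. "
    "Only if a response is clearly warranted, then draft one."
)

def build_prompt(issue: str, allow_silence: bool = True) -> str:
    """Prepend the decision step so 'do nothing' stays a valid answer."""
    framing = UNBIASED if allow_silence else BIASED
    return f"{framing}\n\nIssue:\n{issue}"
```

The point of the sketch is only the framing: the first version presupposes a response, the second keeps restraint on the table.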
Chris Norton: What's annoying about AI as well, with those biases you've just given an example of, is it's still doing the hallucinations. It still makes stuff up, because that's what it does; the whole thing is a hallucination. It can keep getting things wrong, and you keep spotting them and going: hello, I asked you not to include em dashes, for instance. "You're absolutely right, Chris, I am using em dashes." Can you not use them, please? "Sure, here's a redone version." They're still in there. "You're absolutely right." It keeps going like that, and it's just very, very strange. You shouldn't rely entirely on AI. I actually used AI, and this is in an episode coming out in a few weeks, for some research on a guest. Will found this hilarious. It'll come out as one of my mistakes of the year. I'd done extensive research, using AI to do it. This particular guest, who shall remain nameless until they come on the pod, was in a band. I can't remember the name of the band, but it started with B and I or something. Anyway, I gave a big build-up on his profile and what he was doing, and then I said: so what was your first hit? And he just looked at me like, what? It turned out he was a drummer, he was in a band, but it wasn't the band the AI had pulled out. It was all just completely made up. And I was like, God, that is frightening. It totally leads you to thinking it's true, and it's just not.
Will Ockenden: It was a great name for a band, though. It came up with a really good name for a band.
Chris Norton: Yeah, he said it was better than the name his band actually had, but it wasn't the name of his band; the AI just guessed it. There's misinformation for you, right there.
Ant Cousins: I'll give you a simple way of thinking about this. We're all faced with tasks every day, and for each task there's a question: should I do it myself, or should I use AI? That's something we have to ask ourselves about almost everything we do. And there's a very simple logic I use to decide how and when to use AI for any task. The first question is: do I enjoy doing this task or not? And the second is: am I good at it or am I bad at it? If you enjoy doing the task and you're good at it, do it. That's the thing that gets you up in the morning, that's your secret sauce, your competitive differentiation. You're better at it than the AI is; that's probably the thing you're gonna excel at and differentiate on. If you enjoy doing it but you don't think you're as good as you could possibly be at it, then do it and get the AI to challenge you, to coach you, to make you better at it, to poke holes in what you've done. That coaching is probably a great area to use AI. Now, on the other side, if you don't enjoy doing the task but you are good at it, then fine: use the AI to create the content or do the first draft, and you just QA it, because you know what good looks like but you don't want to spend your time doing it. That's probably the biggest area of opportunity for most businesses: things you're good at but don't want to spend time on. And then the danger zone, which is what you're talking about: I don't enjoy doing it, and I'm not good at it either. Don't use AI to just draft it and send it out to the world.
By all means use AI to draft it, but get someone else who knows what they're talking about to QA it, because that's the area where you're not good enough to know whether what you've created is good or bad. That's the logic. It's that simple.
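Ant's two questions form a simple two-by-two, and it pins down neatly as a lookup. A minimal Python sketch of the model as he states it (the function name and return wording are ours, not his):

```python
def ai_delegation(enjoy: bool, good_at: bool) -> str:
    """Map Ant's two questions onto a recommendation.

    enjoy + good at it       -> your differentiator: do it yourself
    enjoy + not good at it   -> do it, with AI as a coach
    don't enjoy + good at it -> AI drafts, you QA
    don't enjoy + not good   -> danger zone: AI may draft, an expert QAs
    """
    if enjoy and good_at:
        return "do it yourself"
    if enjoy and not good_at:
        return "do it; use AI to coach you and poke holes"
    if not enjoy and good_at:
        return "let AI draft; you QA it"
    return "danger zone: AI may draft, but a human expert must QA"
```

The interesting cell is the last one: delegation is fine everywhere except where you lack both the motivation and the skill to judge the output.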
Will Ockenden: Love it. That's a good little model, I like that. We'll have to recreate it as some sort of graphic. We've got a second question here. There we go, we'll plug your LinkedIn for you.
Chris Norton: Yeah, we'll plug that in a minute. Do you think audiences are becoming more sceptical of brand empathy in general, because they assume AI is involved? And if so, how can brands rebuild trust around that?
Will Ockenden: Good question.
Chris Norton: Yeah, that is a good one.
Ant Cousins: Really good question. You always know it's a really good question when you don't have an answer off the top of your head. Good job this isn't live, eh? So, this is the challenge. Previously, when you had a small number of audiences, you could figure out your brand values, your anchor, and create your messaging from a combination of that with your products and your claims. It was relatively simple to create your messaging, a relatively small process, because you knew the audiences you were trying to target. Then audiences started to segment, and then they started to fragment, and they've developed polar-opposite values, beliefs and opinions amongst themselves. Now you're basically walking into machine-gun fire, and you know you're gonna lose some of those audiences. There's only so far you can stretch your messaging for it to make sense to those audiences and not seem inauthentic. And whilst AI, I think, can help you identify the biases and the values and how some of those audiences might react, we've got synthetic audiences now, the ability to test your messaging, if you stretch it too far you're gonna end up being inauthentic: inauthentic to your brand, your values, your claims. The further you go, and this is where it gets risky just letting AI control your messaging, the more you lose the core, the anchor of what is authentic to your brand and your products. So I don't have an answer other than: yes, it's a risk, don't go too far. And I'd bring it back to making sure you've understood which audiences really matter, which audiences are the ones actually spending. Nike is my go-to example of this. Many years ago now, Nike did a campaign with Colin Kaepernick. Remember the American football player?
Yeah, he took a knee during the national anthem, right? So there were calls for cancellation, people not buying tickets for the stadium anymore, he should never play football again, blah, blah, blah. Not exactly a poster child for a national ad campaign. But Nike stepped up, said hold my Bud Light, and did a national ad campaign around that guy and what he was saying, because they realised the people calling for cancellation were not the people burning through three pairs of Nike trainers a year. That wasn't their audience. They realised: this is the audience that matters to us, and they care. And I think the more brands recognise the vocal minority for what it is, distance themselves from it, and stop worrying so much that people are gonna complain... You know what? They're not the ones buying our products; they're not the ones with a voice we care about. That, for me, is the key to avoiding that risk.
Will Ockenden: And, this may sound like a stupid question: if we look back to the days of the Gordon Ramsay AI video, it was so obvious it was AI. It's harder now with video, isn't it? But with written communications, can audiences still tell the difference between AI-generated content and a human sat behind a keyboard, or is it impossible now?
Ant Cousins: I think they can. With each model we introduce a new set of markers. The em dash is a good example: it's really hard to get rid of in older versions of ChatGPT, but on the most recent model it's easier to make it stop. In fact, Sam Altman came out and said that if you tell it not to use an em dash now, it actually won't. So there are these markers. And beyond the individual markers, because AI is trained on the vast majority of the internet, it all trends towards the mean. I think there are subconscious markers in content written by AI, more than the individual tells: the "it's not this, it's that" construction, "it's not just X, it's Y", those kinds of markers you immediately pick up on. And it's more than that; there's a rhythm and a tone that, if you don't instruct it otherwise, you can tell, I think. So if your communications really matter, I would still get a human to do the first draft and get AI to challenge it. If it's low-risk, low-rent stuff, fine, use AI and just tweak it. But if your brand rides on it, I'd still use a human.
Chris Norton: Yeah, definitely. Because it's better to have it analyse what you've done and then coach you, like you just said: you write something from scratch, or you give it a structure and write it yourself, and then ask it to assess it and give you a few pointers on where you could improve the narrative or whatever. I mean, there was an email just before I left for Morocco, I've just come back from seven days away, check me out. Before I left, I got an email from a big brand that shall not be mentioned, one of the marketing titles in the UK, the biggest, I would say. I got it on the Friday before I left, and it started: "In today's huge digital landscape". And I was just like: AI. Em dashes all the way through it. And this was one of the top marketing publications. Even they can't be bothered to write their emails without artificial intelligence. It was written from scratch by AI; you could just tell.
Will Ockenden: Yeah, it was so obvious, wasn't it? And I think people will naturally reject that kind of thing, just like you did. It's not good content, is it? And the same with social media posts or online content. If people can tell it's AI, there must be some stats on how it affects your perception of a brand when they're pushing out AI content masquerading as authentic content.
Ant Cousins: Yeah. Remember one of the most basic rules in PR: the medium is the message. It's not just about the perception of the channel or the content; it's the perceived value and effort behind it. If you've used AI to generate the content, and everyone knows how easy that is to do, the perception is that you care less about the content than if you'd spent a human on it. That's the meaning behind "the medium is the message". So yeah, if you can tell it's AI-generated, you're probably not gonna pay as much attention as if you can tell it's human-generated.
Chris Norton: I know we've got like five minutes, well, three minutes, because we started late, but I hope you're good for time. I just wanted to ask you about a couple of things. First, I've got a message here from somebody asking about one of Twitter's founders: Jack Dorsey has just cut 40% of the workforce at Block. You've probably seen the story in the media, and he blamed it on AI integration. I wondered what you thought of that, given what you were saying about this SaaS apocalypse, because I know you work for a software-as-a-service business. How is AI coming for SaaS? It seems to be flipping the entire sector, which is where it came from, technology, on its head. We've all paid for seats, we've all paid for memberships, and now maybe we don't need ten seats, or maybe we do, or maybe they're gonna have to find an entirely new pricing model for all this software as a service, if you can use one licence to access all the data and you don't need to go into that data anymore. What's your view on all of this? Because it feels like there's real disruption going on there.
When Human Context Still Wins
Ant CousinsYeah, massively. And there are a few things; it's not hitting everyone in SaaS equally. For our particular industry, and when I say our industry I mean media monitoring and social media listening, the guys who go out, sign the contracts with the data providers, get the content in, sign the contracts with the social platforms, get that data in, and then enrich it at scale. We've got nearly two billion items coming into Meltwater per day: every blog, every forum, and so on. Each of those items is enriched with things most consumers aren't interested in, right? Sentiment and reach and engagement metrics and entities and all this kind of stuff that you need if you're doing insights or analysis, but most consumers don't care about. That insulates us somewhat, in this particular industry, from the SaaS-pocalypse, because you can't just go to ChatGPT or Claude and get the kind of insights or analysis you can get from us. They're not doing that enrichment. And firstly, they don't have comprehensive coverage: you can't go to ChatGPT and get X data because, well, there's no love lost between Sam Altman and Elon Musk for one, and in the same way Grok will never have Facebook. So you need an agnostic provider, which immediately gives us a differentiation from the LLM providers. So there's that. Then there are the enrichments we do: every single time we get a new post, we do sentiment, and it's industry-standard sentiment.
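The enrichment step Ant describes can be pictured with a minimal sketch. This is purely illustrative: the class and field names are invented and do not reflect Meltwater's actual API or pipeline, and the keyword sentiment is a toy stand-in for an industry-standard model.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of ingest-time enrichment: every incoming item is
# tagged with metrics most end users never see but analysts rely on.

@dataclass
class EnrichedItem:
    url: str
    text: str
    sentiment: str = "neutral"
    reach: int = 0
    entities: list = field(default_factory=list)

def enrich(item: EnrichedItem) -> EnrichedItem:
    # Toy keyword sentiment; a production pipeline would use a fixed,
    # versioned model so the same post scores the same a year later.
    lowered = item.text.lower()
    if any(w in lowered for w in ("great", "love", "win")):
        item.sentiment = "positive"
    elif any(w in lowered for w in ("fail", "crisis", "worst")):
        item.sentiment = "negative"
    # Crude entity pass: keep capitalised tokens as candidate entities.
    item.entities = [w for w in item.text.split() if w.istitle()]
    return item
```

The point of the sketch is the shape, not the model: enrichment happens once, at ingestion, on every item, which is what makes insights queryable at scale later.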
If you ask us what the sentiment of a post is today, we'll give you the same answer we'd have given three months ago, six months ago, a year ago, which means that really challenging historical comparison over time makes sense in a way that asking ChatGPT the same question never would; it would never give you a reliable answer. So we have some protection, or moats if you like, against those guys. And the other one is that we've built proprietary data sets, journalist databases and influencer databases, which they haven't, and we have access to those too. So it is hitting some SaaS providers in a way it's not hitting our industry, but it still changes our industry. Where it does hit our industry, I think, is that people are starting to use APIs a lot more than they used to. An API used to be a hardcore technical thing: you needed engineers on both sides of the divide to build an integration between two systems, which immediately ruled it out for a lot of companies, because they had to have the engineers to do the thing. But then Skills come along, with MCP servers, and those servers have tools, and now you just prompt: hey, can you go to this application over here, get me this data, bring it back, reformat it and show it to me like this? Yes, absolutely, agents are capable of doing that. So MCP servers and Skills with Claude Cowork open up integrations in a way that wasn't possible before. What we're seeing at Meltwater is a lot of companies coming to us saying: we've got this specific use case over here and we want your data for it. So actually, we're seeing growth in some ways that other companies are not. And this is, I think, a good example of Jevons paradox, if you haven't come across it.
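The tool-calling pattern that MCP standardises, where an agent asks a named tool for data and reformats the result, can be sketched without the SDK. This is a plain-Python illustration of the pattern only; the registry, tool name, and returned fields are all invented for the example, not part of any real MCP server or Meltwater API.

```python
# A named tool registry: the agent looks a tool up by name and calls it,
# which is the essence of what an MCP server exposes to a client.
TOOLS = {}

def tool(name):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("fetch_mentions")
def fetch_mentions(brand: str) -> list[dict]:
    # Stand-in for a call into a monitoring provider's API.
    return [{"brand": brand, "sentiment": "positive", "reach": 1200}]

def agent_request(tool_name: str, **kwargs) -> str:
    # The "get me this data, reformat it, show it like this" step.
    rows = TOOLS[tool_name](**kwargs)
    return "\n".join(
        f"{r['brand']}: {r['sentiment']} (reach {r['reach']})" for r in rows
    )
```

The shift Ant describes is that this lookup-and-reformat loop no longer needs engineers on both sides: the agent handles the plumbing from a natural-language request.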
Jevons paradox is how you explain why, back in the day when Excel first came out, people said there'd be no more accountants because we've got Excel, and then Excel came out and we have more accountants than ever. Two years ago, Geoffrey Hinton, the godfather of AI, said AI is now capable of doing what a radiologist does, so we won't need radiologists; we now have more radiologists than we've ever had. Jevons paradox: the easier and quicker and cheaper it is to do something, the more of it you do, not less, and the overall volume is greater. So, to get back to your Jack Dorsey point, what we're seeing is growth because things are getting cheaper and easier to do; we're seeing growth in demand. And whilst some companies are going the other way, my advice for all companies is: don't ask yourselves, how do we do the same as before with fewer people? Ask yourselves, how do we do much more with the same amount of people? Those are the companies that won in the dot-com boom, the ones asking: how do we use this to accelerate and grow, to expand our pipeline, to reach more customers, to serve more customers, to make more decisions more quickly? The people with an abundance mindset towards this technology will win over the people applying a scarcity mindset. So I would say to everyone listening: if you're facing pressure from your CFO, your CEO, your board, asking how you're cutting costs with AI, ask them back: no, how are we growing our top line with AI? That's my optimistic recommendation. The reality is that CFOs, CEOs and boards are making decisions you can't undo. They go: oh, I'm sure AI can do this, do we need so many people? No, get rid of half of them. Short-sighted approaches to those decisions.
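The Jevons paradox argument can be made concrete with toy arithmetic: if the cost per unit of work falls and demand for that work is elastic (elasticity above 1), the amount of work done rises faster than the cost falls, so total activity and spend both grow. The demand function and its numbers below are invented for illustration only.

```python
# Constant-elasticity demand: halving the cost of a task more than
# doubles how much of it gets done when elasticity > 1.
def demand(cost: float, elasticity: float = 1.5, k: float = 100.0) -> float:
    return k * cost ** -elasticity

before = demand(cost=10.0)  # units of work bought at the old price
after = demand(cost=5.0)    # units of work once AI halves the cost

assert after > 2 * before          # more than double the work gets done
assert 5.0 * after > 10.0 * before # total spend rises too, not falls
```

This is the accountants-and-Excel story in one function: cheaper units, more units, bigger total.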
What that does mean, though, is that in an organisation where you used to need at least five people in your PR or comms team, and now you can get away with four, then for every four of those companies there's a whole other company that can now achieve the same output as the company that used to have five. Jevons paradox. So whilst you might see teams within companies, or companies overall, getting smaller, we will see more companies. We've just seen the first billion-dollar company built by basically one guy and his brother. What used to be something only a large number of people could achieve, we'll see achieved by a smaller number of people. That means more companies. So overall, I'm expecting jobs to stay about the same, albeit the jobs will change, the number of companies will increase, and the number of roles in each individual team may shrink.
How To Reach Ant And Close
Will OckendenLove it. What a positive end to the show. That's quite at odds with a lot of the other narratives in the media about AI. So yeah, that was fantastic.
Chris NortonYeah, thanks for coming on the show again. And how can people get hold of you if they want to?
Ant CousinsLinkedIn, probably easiest. You can reach out to me there. I don't have a specialist help desk or anything at Meltwater, so LinkedIn is best.
Will OckendenBrilliant. Well, that went well, didn't it? Apologies if you heard an ice cream van in the background, parking directly outside our office and playing its music. But thank you for joining us on the LinkedIn Live. Thank you very much, Ant. Fascinating as ever.
Chris NortonYeah, thanks a lot for joining us. Will, this episode is coming out now because it's live, so we don't need to do anything with it; we don't even need to wrap it up. It will be slightly different because obviously it was live, but we want to thank everybody who's been listening, whether on the LinkedIn Live or afterwards. So thanks for joining us, and we'll see you next week.