Embracing Marketing Mistakes

Your Marketing Team Just Got Smarter Thanks to AI

Prohibition PR


What happens when you mix two decades of digital comms experience with a brain wired for analytics, SEO and AI? You get friend of the show Andrew Bruce Smith. 

He’s the founder of Escherman, a CIPR Fellow, and Chair of the AI in PR panel, not to mention a certified Google Partner who’s trained over 3,000 organisations. From global brands to government departments, he’s helped them all wrap their heads around data, strategy and the tech shaping modern PR.

We dive deep into the rapidly evolving world of AI for marketers with expert Andrew Bruce Smith, exploring how reasoning models, research capabilities, and AI avatars are transforming the marketing landscape at breathtaking speed.

• Reasoning models like ChatGPT's o1 and o3 spend more time thinking through complex problems, delivering better-quality responses for marketing plans and strategy
• Deep research functionality allows marketers to generate comprehensive market analyses in minutes that previously took weeks and cost thousands
• Understanding when to use different AI models is crucial: reasoning models for complex tasks, standard models for simpler requests
• AI avatars through tools like HeyGen and Synthesia can create promotional videos and may soon represent you in meetings
• The rise of agentic AI allows for autonomous systems that can execute complex workflows with minimal human intervention
• Marketers need to rethink where they add value as AI handles more tasks, potentially moving from time-based to value-based billing
• AI isn't replacing jobs but tasks, freeing humans to focus on strategic thinking and creativity

The best place to find Andrew is on LinkedIn (there's only one Andrew Bruce Smith) or at his website escherman.com.

Is your marketing strategy ready for 2025? Book a free 15-min discovery call with Chris to get tailored insights to boost your brand’s growth.

👉 [Book your call with Chris now] 👈

Subscribe to our Newsletter
✒️Don't miss a hilarious fail or event by 👉 subscribing to our newsletter here. 👈 Each week we document what we are doing in our business, we share new things we've discovered, mistakes we've made, and tons of valuable marketing tips!

Follow Chris Norton:
X, TikTok, LinkedIn

Follow Will Ockenden:
LinkedIn

Follow The Show:
TikTok, YouTube

Chris Norton:

Hello and welcome back to Embracing Marketing Mistakes, the podcast where we unpack the mistakes from the world's top marketers and the lessons they learned. I'm your host, Chris Norton, and today I'm joined once again by Andrew Bruce Smith, an expert in AI and how it applies to PR and communications. We look at the fast-moving world of AI and how reasoning models are transforming marketing, from AI avatars to the ethical implications. We cover it all and you won't want to miss it. So, as always, sit back, relax and let's get into the wild, crazy world of artificial intelligence. Enjoy Andrew Bruce Smith, friend of the show. And how many appearances is this now? Three.

Andrew Bruce:

I believe this is the third time you've had me. But yes, you've beaten Stewart Bruce, he'll be fuming. Thanks for having me back.

Chris Norton:

Okay. So, I mean, you're the AI comms expert, and this is what's great about this subject matter: so much has changed, even in the last 12 months, hasn't it? It's just mind-boggling how much has changed. So what do you think have been the biggest changes in AI that have made a practical difference, other than all the ethics stuff that everybody's talking about front and centre? I'm looking at the practical things that we think have changed and improved in the last 12 months.

Andrew Bruce:

Well, when we talk about the last 12 months, it feels more like the last 12 minutes. I used to joke last year when I was running training sessions that things I would say in the morning wouldn't be true by the end of the session, at the end of the day. And this year it's happened three times, where I said with great confidence that, oh no, it can't do that, and then literally by the end of the day it's like, oh well, now it can. So that, I think, is an indication of just how ridiculously fast things are moving on.

Andrew Bruce:

To simply categorize it, what do I think are the big developments over the last 12 months, or even, I guess, six months? Well, first of all, the rise of the reasoning model. We learned over the last couple of years that when you give language models more time to think, they tend to come back with a better quality response. So now we have access to AI models that have that kind of capability built into them. Obviously, ChatGPT o1 was the first flavor of that, which came out last September. Now they're ten a penny.

Andrew Bruce:

Everyone's got a reasoning model: Google Gemini has reasoning, Claude's got reasoning, ChatGPT's now got five different flavors of reasoning, it's got o3, o3-pro, o4, whatever. From a practical standpoint, there are so many use cases in marketing, comms and PR for having a model that is capable of spending more time thinking through its options, considering options, leaving options out. In the case of ChatGPT, it's also got some agentic capabilities (agentic AI, the buzz phrase of 2025), where it can decide to use certain tools to execute the task. Reasoning models are better at being given something complex to think about, so that usually is something like: help me develop a really robust plan. It actually goes off and spends time thinking, okay, that's the client, those are the prospects and those are the outcomes they're looking for, and, oh, that's the relatively small budget they've given you; square that circle for me, or at least apply a highly thought-through process to it. That gives me a starting point, and I'll be able to see the thought process.

Will Ockenden:

From a user perspective, then, reasoning models mean better results and more complex queries are solved. Is that fair?

Andrew Bruce:

I think the nuance, and this comes up all the time, is you've now got to think, well, what am I asking it to do? And is a reasoning model the best flavor to use, or should I stick with a vanilla helpful assistant? So, you know, GPT-4o is a helpful assistant, i.e. you ask it to do something and two seconds later there's its response. But it's a bit like a human: if I ask you for your immediate response to something, you'll give it to me, but that's not necessarily right. Maybe, if it's a complicated thing, you should probably take a bit more time to think about it.

Andrew Bruce:

At the same time, giving a reasoning model a trivial task is pretty pointless, because it'll take half an hour to do something that should take five seconds. Don't give that to a reasoning model or it will literally overthink its response to a relatively trivial thing. So it's about getting that balance, the trade-off between: is this something that I need, like, now, and that particular flavor of model will do the job, versus I'm happy to wait 30 minutes if it means it goes off, thinks about it properly and comes back with a much better response after 30 minutes than what I would have got after 30 seconds.

Chris Norton:

Yeah, so we're essentially talking about the deep research functionality that's been launched on numerous platforms. Is that fair?

Andrew Bruce:

Not quite, not quite. Well, that'll be my next point to make. So deep research is a simple flavor of agentic AI. It's where you give your deep research tool, which, again, they're ten a penny now, a research task. Let's say you've got a new business briefing, so naturally you'd want to, I assume, research the prospect's marketplace, or what they've done regarding marketing, whatever. It doesn't matter which research tool you're using, whether it's ChatGPT, Google Gemini, Grok, whatever, it will always come back and try to clarify and understand what exactly you're after. It doesn't just go off and do it; it says, look, can I just clarify what you are after here? You tell it and then off it goes. But it's got a degree of autonomy in deciding: where does it go, what does it look at, what information sources will it consider in the response it gives back to you? Quite often it will consult a bunch of stuff and go, no, I'm not going to use that. So again, there's a whole spectrum.

Andrew Bruce:

Perplexity AI in the free version gives you five deep research requests a day for nothing. Great, free, hurrah. But it's not as detailed as ChatGPT's deep research, which will go off and maybe spend 40, 50 minutes, sometimes an hour, really trying to do a thorough job of finding every single source, digging deeply, and then giving you back, quite often, some 55-page dossier. But all the sources are fully cited, so you can check them yourself and go, yeah, that looks good to me, or, hang on a minute, I don't like those, those look pretty dodgy, so do it again and this time leave that out. It's all about the clarity you have about exactly what you want the tool to go off and do, how you brief it, and off it goes.

Chris Norton:

Yeah, because I would say that any marketer now can basically commission their own dissertation into whatever it is they're trying to do, and I don't think people realise that.

Andrew Bruce:

No, I'll give you one example. I won't mention the brand, but I was having a conversation with the European marketing director of a huge drinks brand. While I was having the conversation, I thought, I'm going to set my deep research agent off to research this brand's approach to marketing over the last 15 years. And before the call finished: oh, hang on, I've got this for you, and I emailed it to him. He was like, good grief, this is brilliant. He said, where did you get this from? I said, I literally commissioned it whilst we've been talking. It was a 45-page detailed document, and he said, my mind is blown. He said, this is the sort of thing I would have...

Andrew Bruce:

You know, when we had an ad agency or a marketing agency, we'd commission them to do this and it would have taken weeks and cost me tens of thousands of pounds. You're telling me that you literally pushed a button? I think I used Google Gemini, actually. Yes, because if it had been ChatGPT I would have had to send it to him after the call, because it would have taken a bit longer. But this was Gemini, and he was bowled over. He was just like, this is crazy.

Chris Norton:

And then you could just keep using it. You could ask it to look at that PDF and then compare its direct competitors, pick holes in the market and see what opportunities are in the market.

Andrew Bruce:

Consider the workflow you can put together: the brief comes in, maybe you get the reasoning model to assess the brief you've got, you use deep research tools to research the brand, the competition, the marketplace, previous approaches to marketing, messaging, all of that. That then becomes context for you to respond to the brief. But I keep reminding people, it's not about pushing a button and it does it all for you. It is much more about how we get to the point in the process that previously would have taken us, I don't know, a week or two, and actually get there in maybe half an hour. And now we've got a lot more time to bring our own human experience and expertise to the party.

Andrew Bruce:

But let's be honest, everyone's moaned forever: oh, if only we had more time, you know, we would have more creative brainstorms, we would spend more time thinking strategically and whatever. Well, hey, that opportunity is now definitely coming to the fore if you are utilizing the technology that is available today in the right way.

Will Ockenden:

I think, you know, when we first had you on the show, when was that, two years ago? 18 months, two years ago, I think. Anecdotally, a lot of the conversation, certainly amongst agencies, around the use of AI was that, you know, often it would be inaccurate, there'd be examples of hallucination, and if you got it to come up with copy or something, it was obviously AI. How has that changed? Where are we at now?

Andrew Bruce:

Well, even then that was largely down to the fact that people just weren't using the tools properly. Again, if you consider that until last year reasoning models didn't exist, what you had were helpful assistants designed and built to give the most likely response, the average, the mean. Of course, a lot of the time in comms you don't want the average, you want above average, you want difference, you want creativity. So that's partly just people not realizing what you have to ask and how to brief it, or not being aware that there are certain tools where there's a thing called a temperature control.

Andrew Bruce:

The concept of temperature with AIs is that you give permission to the model to consider less likely tokens in the sequence. Or, as I describe it, it's how many drinks do you give the model before you ask it to respond? And, just like a human being, if you give it too many drinks you get back literal nonsense. But if you set it about right, you're saying, hey look, you don't have to give me the most likely response, I'm giving you permission to go off-piste. I'm giving you permission.

Will Ockenden:

Is that through a text prompt, or a setting?

Andrew Bruce:

No, no, it's a setting, in certain tools. So if you use any of the API consoles, say ChatGPT's API platform or Google AI Studio, my personal favourites: if you go to AI Studio, on the right-hand side, there it is, there's a temperature control. If you slide it to zero, you're saying, look, I don't want you making stuff up, I don't want you going off-piste, stick to the facts. Go the other way if, in this case, I do want creativity, I don't want the norm, I don't want what everybody else is getting: slide it up towards two. If you put it to two, it is literal nonsense, like it's had 15 cans of Stella and gone gaga. So I think back then it was largely just that people didn't know, and I think that's still quite true, actually. There's an assumption of, well, we're giving you all this amazing AI technology, it's obvious, isn't it, get on with it. And it's like, well, it isn't.
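For anyone who wants to experiment with the temperature control Andrew describes outside the AI Studio slider, here is a minimal sketch using the google-generativeai Python SDK. The API key, model name and prompts are placeholders, and SDK details change quickly, so treat it as an illustration rather than a recipe.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio (placeholder)

model = genai.GenerativeModel("gemini-1.5-pro")  # model names change frequently

# Temperature 0: stick to the most likely tokens, i.e. facts and faithful summaries.
factual = model.generate_content(
    "Summarise the three key points of this press release: ...",
    generation_config={"temperature": 0.0},
)

# Higher temperature: permission to consider less likely tokens, i.e. ideation.
creative = model.generate_content(
    "Give me ten unexpected campaign angles for a soft-drinks launch.",
    generation_config={"temperature": 1.2},
)

print(factual.text)
print(creative.text)
```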

Chris Norton:

I was sharing this in the team the other day; we've got an AI committee here where we share best practice and things. Some of the video coming out of Google, with AI-generated video now, is incredible. You can't tell the difference.

Andrew Bruce:

Well, it's Veo 3, which came out a couple of weeks ago. A, the fact that it generates video and audio at the same time. You could just give it a prompt and say, I want this scene of two people, and here's the dialogue, give it to me, and it does. Some of the Veo 3 stuff that's on YouTube is just mind-bending when you consider how far that's advanced in literally a year or so. And it's the cliché: it's the worst it's ever going to be, it's only going to get better.

Chris Norton:

What are you doing currently, then, with clients? Are you running AI workshops in general? Are you helping them to break down the processes and use the various models at different stages of the process?

Andrew Bruce:

All of the above. It's almost like being a kind of quasi-management consultant where, yes, you can go in and say, hey, in-house comms team or agency team, here's a whole panoply of tools and these different use cases, and this tool is really very good for doing this, and that might be enough. People go, wow, brilliant, I can do it all myself now. Or: actually, can you help us implement this, particularly with in-house teams. One of the interesting things I've discovered, of course, is that you almost always have to, and should, involve the IT department. In bigger organisations, what you find is that AI is seen as a technology decision. So IT is told, right, you figure out what tools we're going to buy for the marketing department or the comms team, and, with respect, IT departments themselves don't have AI expertise either; they're learning as much as anybody else. So a common reaction is to go, well, we'll just say no until we figure out the right tools. And, again, other departments often don't realize that IT has all these responsibilities around data governance, GDPR compliance, et cetera.

Andrew Bruce:

I was in a situation a couple of days ago with a really large in-house marketing team and, very rightly (because I suggested it), we had the head of IT sitting on the call, because he could actually answer the questions people had. People say, oh, you know, Copilot, why have only a few of us got Copilot, why can't we all have it? And I said, well, I have a suspicion it's because of what it's going to cost; do you know what it's going to cost? And the IT director said, yeah, if we give Microsoft 365 Copilot to literally everybody in the marketing department, we're looking at a couple of hundred grand. It's like, what? He said, if you want that to come out of your budget, go ahead. Yeah, exactly. But if it's coming out of IT, it's not cheap.

Andrew Bruce:

I don't care what anybody says: just for 365 Copilot, that extra $30 or £30 per user per month soon adds up, and you have to pay up front as well.

Chris Norton:

You have to pay for the year up front, all in one go.

Andrew Bruce:

I think they've modified it: you don't have to pay up front, but you have to commit to an annual contract. So they're saying, oh, you pay monthly, but you're still going to end up paying at least 360 dollars or 360 pounds per person per year on top of your existing 365 licence. So, cost-wise, you can see why people aren't just universally letting everyone have it. On the other hand, you can't deny, given so many shops are Microsoft shops, that from a safety standpoint it does give people confidence and comfort that, yeah, we can let people loose on using confidential internal information with Copilot, we're not concerned about it ending up in the wrong place. The great irony, of course, is that ChatGPT runs on Microsoft servers, so it's all going to the same place anyway. But there we go.
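To make the licensing arithmetic above concrete, here is the sum behind "a couple of hundred grand". The per-seat price is the roughly £30 per user per month figure mentioned in the conversation; the headcount is a hypothetical illustration, not a number from the episode.

```python
# Rough cost of a Copilot-style add-on licence across a department.
monthly_per_user = 30                     # £ per user per month (figure discussed above)
users = 550                               # hypothetical headcount, for illustration only

annual_per_user = monthly_per_user * 12   # £360 per person per year, committed annually
total_annual = annual_per_user * users

print(f"Per user per year: £{annual_per_user}")   # £360
print(f"Department total:  £{total_annual:,}")    # £198,000 -- 'a couple of hundred grand'
```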

Chris Norton:

It does, and I don't think Will's aware of this, so I'm going to share it, because this is the bit, pardon my French, I'm not going to swear, that is blowing my mind. So, people listening to this, if you're thinking, okay, I'm up on AI: are you aware of ChatGPT connectors? And you're all going, what's that? It sounds dull, right. Well, I'll tell you what you can connect to ChatGPT now: Box, Dropbox, GitHub, Gmail, Google Calendar, Google Drive, Linear, Outlook Calendar, Outlook email, Teams and SharePoint, so literally everything that you use at work, plus HubSpot, which we use as our CRM, is all connectable to your ChatGPT to use as sources for research. Now, that dissertation-in-your-pocket, that PhD-intelligent little robot man or woman I mentioned earlier, can access all your emails, all your meetings, all your files, if you give it the connections. Is that correct, Andrew?

Andrew Bruce:

Yeah, and let's not forget, claude does the same, you know, through MCP. So this is the, the AI at literally any kind of data source you know technically is available. It is much more the well. I can do it. But hang on a minute. What happens, if you know? I point at my internal confidential drive and, yes, it's accessing it and using that. What's OpenAI doing with that? I mean, yes, I could have gone in and it's my data controls and flick the switch to say no, I don't want to improve the model for everyone.

Chris Norton:

I've done that, people, I've done that.

Andrew Bruce:

The glorious euphemism, yes. It's great, it's a selfless act: keep it on and share everything with us so we can use that to improve the next version of GPT. So, yeah, I think this is coming up more and more, where the actual technological capabilities are just off the scale and continue to advance every single day, beyond our ability even to comprehend the possibilities. But also, how do you go, well, great, we could just get on with it, but that would be a bit risky, we'd potentially be exposing ourselves to risk by not having the necessary data governance and other protocols in place. So there's this tension between, typically, a marketing or comms team going, oh, but I want to do all this stuff and I want to do it now, and IT going, you know, we don't want to be blockers here, but, look, seriously, we can't just let you run off and do that.

Chris Norton:

Because, just on this, if you imagine you've connected it to your email, this is the worrying thing. So let's say Will goes, oh, this is great, let's connect it to my Google Calendar, because I always double-book myself. I'm joking. And off it goes; now I can interrogate it. As a business, are you not worried that if, say, someone in marketing connects their emails, does that mean everybody on ChatGPT in that private network can suddenly ask questions about that person's emails? There are so many ethics questions there. You talk about data security, but suddenly, can you access them?

Andrew Bruce:

But to be fair, this is the Microsoft message, which is: look, if you go the Copilot route (and this is why IT departments like Copilot, because it puts in a lot more centralized control), IT can say, right, we decide who gets to access what. You can see all the calendars because of your role and responsibilities, but other people can't just willy-nilly go off and look at other people's calendars, emails or whatever else. So I kind of get that. But that then creates the whole, gosh, how do you sit down and figure that out? How do you work out who gets what, and why should they get that?

Andrew Bruce:

And then, coming the other way, people start to realize, oh, we can just go to IT and ask them: we've seen that these custom Copilots are brilliant, I want one of them, can you turn that on for me? And that depends, of course, on whether the organization has actually paid for the licence to allow those functions and features to be enabled. Because, fair enough, Microsoft's a business, so it's naturally going to say, oh, you want all that? Sure, here's the price tag if you want all those features and capabilities enabled. But the general thrust of it all is clearly a world where literally anything you want to be accessible as context for a particular tool is available. Technologically, if you want to do it right now, you can; that isn't the issue. The issue now is more about how you do that in a way that is safe and mitigates risk.

Will Ockenden:

You're speaking to all sorts of organizations. We're obviously reviewing this through a marketing lens, given that it's a marketing podcast, but which departments in a corporate are you seeing get the maximum benefit from this? Obviously marketing is using it, we know that. Legal, finance, ops: where else are you seeing this being rolled out with big benefits?

Andrew Bruce:

Yeah, I mean, I think that is true. There probably isn't any department in an organization now that isn't, in some shape or form, using AI. I'd still argue that, on the whole, it is still pretty unsophisticated. I keep reminding myself; I start to think, surely everyone knows this, isn't it obvious? Well, clearly it isn't.

Andrew Bruce:

If I'm demonstrating to people, look, if you use a reasoning model, people go, what's that? Well, you need to understand the difference between these two different flavors of model. And if you use a reasoning model, these are the sorts of use cases you can put it to. People go, oh really, wow, that hadn't occurred to me. Well, why would it? Until somebody suggests that's the sort of thing you can do.

Andrew Bruce:

Imagine getting a reasoning model to develop a highly detailed persona or representation of an audience you're targeting. Then you get it to put itself in the shoes of that person, that audience. Ask it to develop a thousand messages. Ask it to then rate and rank those messages on the basis of: will they resonate with that kind of person? Are they more likely to encourage the outcome you're looking for? Back it comes, and you say, well, explain how you arrived at that, explain your scoring rubric, which it does, and you yourself can judge: does that sound sensible? And then, after a very brief period of time, you suddenly go, wow, I've got this massive AI brain to assist me in developing messaging and a plan: allocation of resources, allocation of budget, recommendations on how to go about creating the content. Oh, and by the way, now all these AI tools can assist me with developing the right sort of modalities, that's the buzzword of the day, modalities.
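A rough sketch of the persona-and-messaging workflow Andrew outlines, using the OpenAI Python SDK. The model name, prompts and the 50-message batch (scaled down from the thousand mentioned above) are illustrative assumptions, not a prescribed setup; any reasoning-capable model behind a similar chat API would do.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def ask(prompt: str, model: str = "o3-mini") -> str:
    """Single-turn call to a reasoning-capable model (model name is illustrative)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# 1. Build a highly detailed persona of the audience being targeted.
persona = ask(
    "Develop a highly detailed persona of a 35-50 year old procurement director "
    "evaluating B2B software: goals, pressures, media habits, objections."
)

# 2. Generate a batch of candidate messages aimed at that persona.
messages = ask(
    f"Put yourself in the shoes of this persona:\n{persona}\n\n"
    "Now generate 50 candidate campaign messages aimed at this person."
)

# 3. Rate and rank them, and ask for the scoring rubric so a human can judge it.
ranking = ask(
    f"Persona:\n{persona}\n\nCandidate messages:\n{messages}\n\n"
    "Rate and rank each message by how likely it is to resonate with this persona "
    "and encourage the desired outcome. Explain the scoring rubric you used."
)

print(ranking)
```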

Will Ockenden:

Has there ever been research on the effectiveness of AI used to develop marketing or creative messaging, versus the purely human route?

Chris Norton:

Well, Adobe are doing it all the time at the moment. I mean, I'd be fascinated to know.

Will Ockenden:

You know, is it as good as a very creative human?

Andrew Bruce:

If done right, yeah. I guess the nuance is, you know, can it be creative enough that it allows you, as the human being, to get what you're after within the constraints everybody faces: of time, of access to resource, of budget. We all know everyone, everywhere, is being told to do more with less. It's the constant refrain that there are ever-increasing expectations and pressures on, certainly, marketing departments and agencies. Everyone's being told, we need to do more, but you're not getting more budget; you're probably getting, at best, the same as you had before, but we have to get more from it. So how do you square that circle? As I say, informed and intelligent use of AI can go a long way to helping achieve that.

Chris Norton:

Can I ask a question? Because we're on this committee, I've used all of them and do use all of them. I'm a full Pro user of ChatGPT. I've got Copilot, and Copilot for me is pretty good, it's adding stuff to itself all the time, but there are still bits of ChatGPT it doesn't do.

Andrew Bruce:

Copilot is GPT under the hood, but you're never going to get the latest versions of GPT, and there's always going to be stuff in ChatGPT that isn't in Copilot.

Chris Norton:

But the difficult thing I've got, and I've heard people debating this, is that ChatGPT has got too many models. There's about nine. Is it more than nine now? It's eight, is it? So it's like going to Google and having eight different tabs. It's crazy.

Andrew Bruce:

There's a great cartoon meme of the superhero sweating as he looks at all these buttons, all the different versions of the models. It's like, oh God, which one should I be using? I mean, they've admitted it. Sam Altman said, yeah, you're right, our naming conventions are all nonsense. Ultimately, they've said there will be a single model and you won't need to worry about all these flavors. If you look at Claude, the Claude approach is an indicator. With Claude, they don't have a separate assistant and reasoning model. There's a single model, and if you want reasoning, there's a little toggle and you turn it on; if you don't want reasoning, you don't turn it on. So I think that's a more user-friendly way of achieving the same end, as opposed to putting the burden on the user of knowing, oh yes, I should be using GPT-4.5 for that, because that's the right one for that, and yes, I should be using o3-pro for that, and so on.

Will Ockenden:

So, yeah, I suppose, I think we asked you this the first time we had you on the show. There'll be people listening to this who are quite sophisticated and do get it, and naturally, the more you use AI tools, the more you understand the language and the conventions of how they work. But, equally, there'll be listeners thinking, Christ, I need to get on top of this stuff, I've only used ChatGPT a couple of times. So what are the three kind of essential tools, cutting through all the bullshit, cutting down to those essential tools that people need to be using?

Andrew Bruce:

If I really could distil it down to a handful, then, if you're honest, you've got to have access to at least one kind of foundation model, whether it's GPT or Claude or Gemini. Personally, if people push me and say, look, just give me one, at the moment I probably err towards Google and Google AI Studio, partly because Gemini 2.5 is a very, very impressive model. To get slightly technical about it, well, it might change in the next five minutes, but as I speak now, I think I'm right.

Chris Norton:

It's changed already, you're already wrong. Sorry, sorry, I'm taking over this podcast.

Andrew Bruce:

It's got the biggest context window, a one million token context window, so you can give it roughly 750,000 words of content. Of course, it's multimodal, so you can give it a video and say, analyze the video. Imagine you've got a client who's appeared on a TV broadcast; say, right, analyze that video from the perspective of a member of the target audience. What do you think they think of that spokesperson? Do you think we got the messaging across, both explicitly and implicitly? And it does it.

Andrew Bruce:

It's just nuts, because you thought, oh, is it just analyzing the audio? No, no, no, that's what it used to do. Now it understands it as video at 24 frames a second, it understands the entities. It's crazy. And you can do the same for audio, et cetera. So, yeah, AI Studio, and it's free, well, free to a point. So you're getting access to all of the latest and greatest AI models, you can use the multimodal features, you can talk to Gemini. You can share your screen and say, hey, Gemini, can you see my screen? Yes, I can. Oh, I've got our homepage up there, got any tips? Take on the role of a world-class web design expert and give me some recommendations about how we should tweak that. And it talks to you. It tells you, oh yeah, I think the color contrast, from an accessibility standpoint, you might want to reconsider that. It's like, what?
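The video analysis Andrew describes can also be reproduced programmatically rather than through the AI Studio interface. A minimal sketch, again assuming the google-generativeai SDK; the file name and model name are placeholders, and the upload-then-poll pattern follows the SDK's File API at the time of writing, which may change.

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload the broadcast clip; large video files are processed asynchronously.
clip = genai.upload_file(path="client_interview.mp4")  # placeholder file name
while clip.state.name == "PROCESSING":
    time.sleep(5)
    clip = genai.get_file(clip.name)

model = genai.GenerativeModel("gemini-1.5-pro")  # model names change frequently
response = model.generate_content([
    clip,
    "Analyse this interview from the perspective of a member of the target audience. "
    "How does the spokesperson come across, and did the key messages land, "
    "both explicitly and implicitly?",
])
print(response.text)
```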

Chris Norton:

The whole thing is mental, right? I mean, it's really exciting, but are we just talking ourselves out of jobs? I hate this debate about talking ourselves out of jobs, but the more you talk about it, you're like, okay, I'm in-house, I can now do persona marketing, I can now do... You know what I mean? You can do everything in-house pretty much, can't you?

Andrew Bruce:

Yes and no. Because, again, the bit that's missing from all this, of course, is the experience and the expertise of the human being. I've said this for a long time: for agencies of any kind, what is going to be the value that you're bringing to the party? There's that in-house trend of more and more work moving in-house, because in-house teams feel that, well, we're capable of doing it ourselves, we don't need an external agency to help us with that, and a kind of changing nature of what is still left for the agency to do. What will clients regard as valuable and still need and be prepared to pay for? Because, of course, that's the big thing: you've got to do stuff that is valued, and valued to the extent that the appropriate price is paid for it.

Andrew Bruce:

We've seen this over the last two years, though I thought we'd see more of it, where clients might go, well, you know what, why does it take you four hours to do something if I'm paying by how much time it takes? Surely it should take you five minutes with AI, so I'll pay you for five minutes, but I'm not paying you for four hours.

Chris Norton:

We're not endorsing that approach on this podcast, Andrew.

Andrew Bruce:

No, no, no, but I'm just saying we've had a debate about the move from time-based billing to value-based billing for decades. I'm not saying it's easy, but nevertheless that transition, I think, at some point is going to have to come, and I think we are starting to hear more and more in-house teams and clients going, yeah, why am I paying for four hours to do that? But it's the Whistler's Mother analogy, isn't it?

Andrew Bruce:

James Whistler, the American painter, was asked, whenever it was, Victorian times, how long did it take you to paint that picture? Yeah, two days. How much do you think it's worth? And he said, I don't know, 10 million dollars, which back then was like a billion. They said, how can you possibly justify that? It's like, ah, you're not paying me for the two days it took to paint it. You're paying me for my lifetime, and all the pain and sweat and tears that I've gone through. There's no way I could have painted that picture in two days without all this other stuff. So that's why it has that price tag.

Andrew Bruce:

And I guess, in a similar vein, how do you demonstrate that? The reason why we can do this in five minutes isn't because I just pushed a button. If I can just push a button, why do you need me? Because you can automate that. It's the bit you don't know about, that you don't see: we've thought about what we should be asking the model to do for us, where the model can help us get to. But it can't do everything, and it needs both us and it to give you, the client, this thing which is very valuable to you.

Chris Norton:

So, if you were setting up a brand today, or a marketing team even, would you start with an AI consultant going in and saying, right, how could we build this team up to future-proof ourselves? And just look at the processes, and maybe, it's not necessarily fewer jobs, it's how can we future-proof the business to have a marketing team that does great work?

Andrew Bruce:

I know some people think it's a bit of an over-optimistic view, but I kind of figure, well, why shouldn't it be like this? I think Sam Altman said, look, it's not about replacing jobs, it's about replacing tasks. Or rather, AI is actually very, very good at doing certain things. So, quite frankly, if an AI can do something to the same or better standard than a human, fairly self-evidently you're probably going to go, yeah, let the AI get on with that. But the fact is, of course, that it's not going to do everything. So, yes, that person who once upon a time would have done this, this and this is now only left with this.

Andrew Bruce:

Well, the decision is: do you need that role at all? Or is the time that's now freed up used and applied to different things? For example, we're hearing more and more that we're all turning into managers, because, effectively, you don't consider GPT a tool, it's another colleague. So how do you manage and orchestrate your newfound AI colleagues to support you? I think that's an interesting one, because it says, well, actually, maybe even junior members of staff now need to start thinking more managerially. Whereas in the past you would focus on doing what you do, actually you've now got a bunch of even more junior, very bright, willing additional co-workers here. So how can I get my cohort of AI co-workers to help me do what I'm being asked to do? I think that's a little thing to consider there.

Chris Norton:

The Christmas party is going to be a bit shit though, isn't it?

Andrew Bruce:

Well, I don't know, it depends how you train your AI avatars. They might be a laugh; let them do the Christmas jokes. Oh, and they'll be dead good at the karaoke. You'll have a Suno-backed AI avatar that will be able to sing any karaoke track in any style you feel like.

Chris Norton:

So do you think, in 12 months' time, we'll be able to interview an AI-synthesised Andrew Bruce Smith? Do you think that will already be there, and Will and I will be able to grill an AI version of you, and it'll be more entertaining than you? What do you think?

Andrew Bruce:

I could be wrong on this one, but you've got the AI avatar tools like Synthesia and HeyGen. I've got my own AI avatar which I routinely use now. If you're speaking at a conference or an event, you always get asked today, oh, could you just do a quick 30-second promo video for the event? It makes it sound like, oh, I just whip out my phone and seamlessly say what I need to say, it takes 30 seconds and I send it off. It's like, no, I'm going to think about what I'm going to say, and then, do I get it in one take? No, I don't. It's going to take me probably an hour or so to get something that's vaguely usable. And you think, that's actually a lot of time just to generate that little clip. Or: get my avatar, give it the script, push the button, there you go, job done.

Chris Norton:

So have you appeared in places as an AI, then, rather than as a person?

Andrew Bruce:

Have you not seen my avatar on LinkedIn recently? Every time I do an event, I use my avatar to promote it.

Chris Norton:

I'm going to check it out. I don't know how I've missed that.

Andrew Bruce:

Well, you see, because you didn't realise it was an avatar.

Andrew Bruce:

I always self-declare the fact that this isn't me, this is my AI avatar, but I look forward to seeing you in person at the real-world events. But to answer your question, yeah, definitely, because we've got to the point where the avatars are available. The next step is simply, well, can we take those avatars and turn them into real-time interaction? We can already do it with voice: if you've got ElevenLabs, it takes a minute to create a conversational, interactive voice agent. So think about things like media training. You can create interactive voice agents based on a certain journalist and say, right, here's the profile of the journalist, they're going to really pressure you, and you just talk to it and it talks back and you have an interaction. So you've already got that with voice; the components, I think, are already there. I mean, Zoom has already announced that it's going to make available the ability for you to have your own AI avatar participate in Zoom meetings.

Andrew Bruce:

Well, they announced that at the end of last year, I think.

Andrew Bruce:

That's crazy, yes. But I think that's gone slightly too far, because it's addressing the wrong problem. The problem is we've got too many meetings. That's a solution that says, oh look, you've got 10 meetings to go to, so you can't go to all of them; hey, you can send your avatar to nine of them, you pick the one you want to go to, and then you send your avatar off to join the others.

Will Ockenden:

Imagine being caught in a meeting when you're the only human and there's nine other avatars.

Andrew Bruce:

That's already happened. People turn up and go, I'm the only person here, the rest are all AIs. Nobody else could be bothered to turn up, they've sent their avatars or their AI meeting assistants instead, and I'm the chump left talking to myself.

Chris Norton:

To be honest, a lot of the time the avatar would probably do a better job than us, so, I mean, fair play. Has your AI avatar not gone rogue, or does it completely stick to the task?

Andrew Bruce:

I haven't yet experienced my avatar saying anything inappropriate, or anything it shouldn't. But, in all seriousness, that is an inherent concern about the rise of AI agents. Yes, there's lots of excitement about, oh great, the agent can go off and do all this stuff and autonomously make decisions for you. But, you know, OpenAI is pushing its Operator feature in ChatGPT, or at least the Pro version, and they're saying, oh look, you can get your agent to go off and research your holiday, and if it finds the right one, you can just give it your credit card and it'll go and book it for you. It's like, what?

Will Ockenden:

Oh yeah, am I handing my credit card over to my AI? No. So if you've got an avatar or an AI agent, is the issue of bias still a concern? Let's say you have an AI avatar representing you in 10 meetings a week. What happens if that AI avatar suddenly starts spouting, you know, right-wing propaganda out of the blue?

Andrew Bruce:

Well, in principle, if we take, say, the ElevenLabs conversational voice agent as the example: when you're creating it, you obviously have the opportunity to tell it, here's how I want you to behave. So there's the extent to which the AI will obey the system prompt or the context that's provided to it before it goes out there. But, again, how could you possibly foretell or foresee? You might have told it explicitly, look, these are all the things I don't want you to talk about, but who knows, because fundamentally, behind it all, is a probabilistic AI system. I mean, there's a lot of stuff.

Andrew Bruce:

Recently with Claude, there was that research paper that caused a bit of a stir, wasn't there? The researchers at Anthropic, while they were testing the new version of Claude 4: one of the examples was where the researcher was talking to Claude and suggested he wanted Claude to help him develop a biological weapon, I think that was what it was, and Claude was like, well, hang on, that's wrong. So Claude, apparently independently, then tried to find the email address of a whistleblowing agency to email it and say, hey, this guy's developing a bioweapon. And the other one was where the researcher pretended to Claude that he was cheating on his wife, and Claude tried to find the guy's wife's email address.

Will Ockenden:

Yes, oh my God. That's awful. More ethical than a human being.

Andrew Bruce:

When you have that, and you consider that it's bad enough with just one agent, imagine when people start talking about, hey, you can have literally armies of agents out there on your behalf doing all this stuff. It's like, yeah, but, blimey, that is gazillions of them. In fact, there are going to be more AI agents on the internet, on the web, than humans. That's another aspect of all this: when you're building a website today, are you building it for a human being or are you building it for an agent? Because some people are saying it's already happening that an AI agent is more likely to visit your website than a human being.

Chris Norton:

I think that's fake news.

Andrew Bruce:

Humans can't be bothered. I can't be bothered, so I've sent my agent off to do stuff, and therefore it's the agent that turns up. So how do you make the website attractive to the agent? Because, again, SEO is kind of already being completely overturned.

Will Ockenden:

I wonder if GA would be able to distinguish an agent visiting a website versus a human visiting a website, because it becomes two tiers of web traffic, doesn't it?

Andrew Bruce:

Yeah, exactly. Two segments, yes: human versus agent.

Chris Norton:

The term agent makes me think of The Matrix. What's the agent called in that? Mr Anderson... Agent Smith. Ironically, Agent Smith, there you go, are you two related? Oops, my cover's blown. We've gone really technical, but, sorry, agents: we haven't really explained the difference between agents and all the other models, what they can do, and some practical examples. Can you just give us some, in a marketing context?

Andrew Bruce:

Yeah, I'll be brief. I think you should distinguish between automation and agents. With automation, the basis is a set of rules that I want the tool to slavishly follow: don't deviate, that's it. Agents is this whole idea that there's a degree of autonomy, as in, I tell the agent to go and do something, but I'm leaving it up to the agent to figure out for itself how to go about executing that task.

Andrew Bruce:

So deep research is a simple form of AI agent. You know, go off and research this client's marketplace. Okay, it's going to go onto the web. Which websites does it choose to go and look at? Well, it figures it out: I'm going to go here and here and here, and it decides what information it will or won't use, and brings it back. And you can expand that to literally any task.
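To make the automation-versus-agent distinction concrete, here is a deliberately simplified conceptual sketch. It is not any particular product's implementation: call_llm is a hypothetical stand-in for whichever model API you use, and the tool-choosing loop is the essence of what deep research tools do at much greater scale.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to a language model and return its reply."""
    raise NotImplementedError  # wire this up to your preferred model API

# Automation: a fixed set of rules, slavishly followed, no deviation.
def automation(brand: str) -> str:
    report = call_llm(f"Summarise recent coverage of {brand}.")
    return call_llm(f"Turn this into three bullet points:\n{report}")

# Agent: same goal, but the model decides which tool to use next and when to stop.
TOOLS = {
    "search_web": lambda query: call_llm(f"(pretend web search) {query}"),
    "summarise": lambda text: call_llm(f"Summarise:\n{text}"),
}

def agent(goal: str, max_steps: int = 10) -> str:
    notes = ""
    for _ in range(max_steps):
        decision = call_llm(
            f"Goal: {goal}\nNotes so far:\n{notes}\n"
            f"Available tools: {list(TOOLS)}.\n"
            "Reply with 'TOOL <name> <input>' or 'FINISH <answer>'."
        )
        if decision.startswith("FINISH"):
            return decision.removeprefix("FINISH").strip()
        _, name, arg = decision.split(" ", 2)        # the agent picked a tool
        notes += f"\n{name}: {TOOLS[name](arg)}"      # record what it found
    return notes
```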

Andrew Bruce:

If you've tried Manus, which is one of the tools that has emerged this year: it's a tool that allows you to effectively build your own AI agents. You go to Manus and say, hey, this is what I want. I'd like you to not only go off and research this client's marketplace but, once you've done that, bring back the report, then use that report as the basis for generating a slide presentation. You literally brief it, click the button, and go off and do something else, make a cup of tea or get on with something else, and an hour later it might report back and say, oh, I've done it now, and you go and look at what it's delivered back to you. And it'll show you: here's what I did, here's the process, here's where I went, here's the tools that I used. I think it's got Claude in the background as the AI component, but it is a mix of AI plus automation tools plus other supplementary tools, where the AI can say, hey, to do that, I should be using this particular tool. So, yeah, there's kind of no limit to what you could construct agents to do for you.

Andrew Bruce:

It's not quite agentry, but you've also seen the rise of this concept of vibe coding. Andrej Karpathy came up with this in February. So: hey, it would be great if I had a little app that just did this for me, but I'm not a programmer, and there's no way I can afford to pay a developer to do it, and it'll take too long anyway, so forget it.

Andrew Bruce:

With tools like Google Firebase Studio, you go in and say, here's the app I want. It goes, great, I'll build a prototype for you. You watch it writing the code and it comes back; it's crazy, it's literally nuts. So from a marketing perspective, I think it's very interesting. I'm sure we've all had it: we sat there one day going, oh God, wouldn't it be great if there was an app just for me that just did what I want. Now you can literally brief a tool like that to at least go off and build a little working prototype, and, who knows, after 10 minutes you might have something. It's not going to build an enterprise app, but that kind of low-level, simple stuff that historically would have been a lovely idea you'd never get round to; now you have the ability to potentially build those little supportive apps that help you do what you need to do. I think that's quite cool.

Chris Norton:

Do you think it could, so, just so I can understand this, do you think you could run an agent and say: assess this market, assess my market, how does our website sit in that market, what's missing off our website and in our design that would make us stand out, look at the points of difference, and then code me a WordPress website based on that brief?

Will Ockenden:

I can hear your brain ticking. Yeah, he knows me very well.

Andrew Bruce:

Yeah, have a go. Manus will give you 3,000 credits for free to start to have a play around with it, so you might just give it that as a brief and see how it gets on. I think at the moment, though, one of the gating factors is how long the agent can go on for before it conks out or just doesn't know where to go next. So there is a kind of limitation on giving it a really complex, complicated workflow which, in the real world, might take a human being hours to execute. But then again, Anthropic, I think, reported they've seen some of their Claude agents work continuously for seven hours at a stretch. So the agent goes off, keeps going, does the thing, and seven hours later comes back. So that kind of task of do this, do this, do this: in a sense, it's kind of already here, just unevenly distributed.

Chris Norton:

Does that mean the agent worked for seven hours non-stop and then took its lunch hour at the end of the day? Is that why it's done that? It's working flexi-time. I mean, I could talk to you for hours about this. I love this subject matter.

Will Ockenden:

I mean each time it's equally fascinating.

Chris Norton:

If people want to get a hold of you or your AI avatar, how can they get a hold of you?

Will Ockenden:

And how can they spot the difference?

Andrew Bruce:

Well, quite, because my avatar always self-declares that it's an avatar; I think that's the ethical thing to do at this point in time. The easiest place to find me is LinkedIn, actually. Search for Andrew Bruce Smith; there is only one Andrew Bruce Smith on LinkedIn, so that's where I typically post stuff these days. Or the company website, escherman.com. Either one of those will find me.

Chris Norton:

Excellent. Well, thanks for coming on the show again, Andrew. Really appreciate it.

Will Ockenden:

Yeah, brilliant, thank you for that.
