CNLP 656 | James JP Poulter on the Best and Worst Case Scenarios for AI in the Next Decade, Practical AI Tools You Should Start Using Now, And The Pastoral Implications of AI

Released Tuesday, 11th June 2024

Episode Transcript


0:01

The Art of Leadership Network. We've

0:03

already seen the impact that COVID and

0:05

the smartphone has had on young people's

0:07

ability to form lasting social bonds

0:09

with one another, to have

0:12

deep conversations and deep relationships. And

0:15

who knows already what knock-on impact that's going

0:17

to have 10 years down the road to the

0:19

population, let alone anything else. If young men can't

0:21

find young women to build the

0:23

population with, without some other artificial intervention,

0:25

we're going to have problems, guys. That's

0:27

the way it's going.

0:29

Add to that the ability for you to have

0:32

a virtual girlfriend or a virtual boyfriend and

0:34

fall in love with the chatbot and be

0:36

totally happy with that, I think

0:38

is a relational category that we don't want

0:40

to see emerge. And that's actually probably the

0:42

more likely worst case scenario that

0:44

we are already seeing companies

0:47

building tools, building experiences where you

0:49

can essentially take a step back

0:51

from society and have pretty much

0:54

everything done for you, both

0:56

your work product, your relational product, your

0:59

call it whatever aspect of your life

1:01

you want managed. And we

1:04

end up in a kind of, you remember that

1:06

Disney movie, WALL-E, right? We kind of end up

1:08

in that kind of all sat slightly in a

1:10

half vegetative state looking like a potato leaning back,

1:12

watching a recycling truck clean up the world. That's

1:16

not a great outcome, generally

1:18

speaking. Welcome

1:23

to the Carey Nieuwhof Leadership Podcast. It's

1:26

Carey here, and I hope our time

1:28

together today helps you thrive in life

1:30

and leadership. Man, I am excited to

1:32

talk about all things AI with James

1:35

JP Poulter. He has started

1:37

and sold an AI company. We talk

1:39

about, well, everything for the church and

1:42

he's involved in that space. And

1:44

believe it or not, if you've been like a late

1:47

adopter, hey, we got some really practical tools to get

1:49

you started. If you like me are interested in the

1:51

meta issues as well, like what is this going to

1:53

do to us? We can talk about

1:55

that and a whole lot more. Today's episode is brought

1:57

to you by my live event, The Art

2:00

of Leadership Live. I'd love for you to

2:02

join me in Dallas, Texas September 16th to

2:04

18th. So if you're

2:06

ready to break through what's

2:08

been holding you back,

2:10

you can secure your

2:12

spot by visiting theartofleadershiplive.com.

2:14

That's theartofleadershiplive.com. And today's

2:16

episode is brought to you by my friends

2:19

at TENx10. You know they are

2:21

committed to making faith matter more to the

2:23

next generation. You can visit TENx10. That's t-e-n-x-1-0.org

2:28

slash R-D-I to complete a

2:30

free assessment that will measure your

2:32

youth ministry's efforts. So that's

2:35

tenx10.org/RDI to complete

2:37

your free assessment today. Well,

2:40

what is the best case scenario for

2:42

AI in the next decade? What's the

2:44

worst case scenario? What are some practical

2:46

tools you could start using now? And

2:49

what are the pastoral implications of AI?

2:51

I sit down with James Poulter. JP

2:54

is the head of AI and innovation

2:56

at House 337, former

2:59

CEO of Vixen Labs, which sold to House 337

3:01

in December of 2023. Vixen Labs is one of

3:06

the leading consultancies focused on

3:08

conversational AI. JP is

3:10

also the founder of Ecclesia, a think

3:12

tank focused on the future of AI

3:14

for the church. He was previously the

3:17

head of emerging platforms and partnerships at

3:19

the Lego Group, cool job, where he

3:21

set up the likes of

3:23

Lego Life, the company's social network

3:25

for children, and oversaw a

3:27

number of the group's partnerships with Meta,

3:30

Spotify, and many more. He's

3:32

a sought after international speaker,

3:34

podcaster, and writer on the

3:36

future of AI and voice

3:38

assistants and innovation culture. So

3:40

really delighted to have him on the podcast

3:42

today. Hey, leaders, if you're looking to level

3:45

up your leadership and level up your church,

3:47

I'm hosting my very first conference, The Art

3:49

of Leadership Live in Dallas,

3:51

Texas from September 16th

3:53

to 18th. So the conference is going to be

3:56

a little bit unconventional. Instead of listening to

3:58

eight hours of keynotes each day

4:00

and leaving with pages of notes that never

4:02

turn into real results. You've been there, right?

4:04

You get done with those conferences. The

4:06

Art of Leadership Live has a really

4:08

cool balance of teaching, connection, and free

4:10

time so you can find the right

4:12

insights with the right people and act

4:14

on them. So yeah, I'm going to

4:16

be giving some talks, but we're

4:19

going to have open and honest discussion with me and

4:21

with other people who are there. Practical

4:23

takeaways. It's an intimate event. We cap

4:25

registration at a very low number. So

4:28

to ensure the right people are there

4:30

with you, the event is by application only.

4:32

There are only a limited number of spots.

4:35

It is close to being full. So

4:38

act now. You can go

4:40

to theartofleadershiplive.com to learn more

4:42

and register before it's sold

4:44

out. Go to theartofleadershiplive.com. Register

4:47

now before it's too late. And then

4:49

in my conversations with a lot of

4:51

you, you've shared about the challenges you're

4:53

facing with engaging young people in your

4:55

church. I get it, man. It's hard.

4:57

I know you're trying everything from outreach

4:59

activities to small groups, but do

5:01

you know whether you're actually making a difference

5:04

or not? Well, our friends at TENx10

5:06

are committed to making faith matter more

5:08

to the next generation. They're

5:10

offering you a free five-minute

5:12

assessment called the Relational Discipleship

5:15

Inventory. And after you complete the

5:17

survey, you'll get an instant assessment

5:20

that measures your youth ministry's efforts

5:22

against the seven discipleship emphases that

5:25

are proven to help you grow

5:27

in relational discipleship, radically focused on

5:29

Jesus and his love for the

5:32

next generation. So you can

5:34

visit tenx10.org/RDI. That's

5:38

t-e-n-x-1-0.org/RDI, to

5:41

complete your free assessment today,

5:43

that's tenx10.org/RDI, and you can

5:45

get your free assessment. And

5:47

now my conversation with JP

5:50

Poulter. JP, welcome to the

5:52

podcast. Okay, thanks so much for having me. It's a

5:54

pleasure to be here. Yeah, so I wanna start in

5:56

the deep end. What

5:58

are the threats of AI?

6:00

There's like two different views, right? There's

6:02

the benevolent view of AI, and then

6:05

there's the malevolent view of AI. So

6:07

from where you sit, what are the

6:09

threats, the existential threats that AI poses

6:11

right now? Well, I

6:13

think the biggest distinction is that I'm not

6:15

sure that AI is the existential threat, but

6:17

it's AI in the hands of humans that

6:19

might be the existential threat, which

6:22

is probably an important distinction. In

6:24

the church, we have a habit, sometimes a nasty

6:26

one when it comes to technology of looking for

6:29

666 in the code of everything that

6:31

we use, assuming that there's

6:33

something evil lying behind it. I

6:36

don't think that that's what's happening here, but we

6:38

obviously know from all the work that's being done

6:41

across the industry that AI does have the potential

6:43

to radically transform society and particularly when

6:45

put in the hands of those that may not

6:47

want to use it for good, can

6:49

really have some pretty devastating

6:51

impacts on things like the economy, on

6:54

politics, and on the geopolitical

6:56

space. There's a real opportunity that

6:58

things could go awry with AI

7:01

being used by bad actors. But AI itself

7:03

doesn't seem to, at the moment, want to

7:05

come and kill us all, even

7:07

though it might have the potential to do so. So

7:10

I don't think that we are starting the

7:12

conversation from a position that we should be

7:15

fearful of AI. But we are

7:17

in a position where we need to be

7:19

faithful with it and particularly take a considered

7:21

approach to how we use it as Christians and how we

7:23

bring it into the church and everything else that we do.

7:26

One of the things that I

7:28

think about a lot is unintended

7:30

consequences. So if you look at

7:33

social media a decade ago, most people,

7:35

well, maybe not a decade ago, but

7:38

15 years ago as it was developing,

7:40

most people would say, oh, this is

7:42

good. We had no idea it would

7:44

produce the unintended consequences, particularly among teenage

7:46

girls. Gen

7:48

Z with the anxiety, Jonathan Haidt

7:50

has done some incredible work in

7:52

that area this year. Just highlighting.

7:55

I've just finished reading The Anxious

7:57

Generation. I think we see that those...

8:00

like second and third order consequences just

8:02

couldn't have been anticipated when that technology

8:04

emerged. Well, exactly. And I mean, you

8:06

know, I didn't see it either

8:09

and I was an avid adopter and you

8:11

know, now I realize, oh yeah, this is

8:13

messing with my brain too. So,

8:15

you know, when you think about AI in the

8:18

future, one of these sub

8:20

arguments under how social media is used but

8:22

also AI is it's all

8:24

monetized. Government policy can hardly

8:26

keep up. The EU has done probably

8:28

the best job globally or worst job

8:30

depending on how you look at it

8:33

of regulating technology. But like the

8:36

government doesn't even understand it. And

8:38

I was reading some backstory on Sam

8:40

Altman being kicked out by the

8:43

board of OpenAI earlier this

8:45

year and then being

8:47

brought back in after he was offered a job

8:49

by Microsoft and it seemed to be

8:52

the backstory there from what I

8:54

can determine. And again, I don't know Sam Altman

8:56

and I don't live in Silicon Valley, is

8:59

that it was his seeming lack

9:01

of regard for the

9:04

human consequences of AI and really

9:06

the race to be first to

9:08

market, dominant market, profit, profit, profit.

9:11

Any thoughts on that when really you

9:13

can make an argument with the Magnificent

9:15

Seven that it's just

9:17

a race to shareholders, greatest profitability to

9:20

be first, et cetera. Is that what

9:22

you mean by the existential threat being

9:24

in the hands of humans? Or

9:26

what's your take on monetization and all

9:29

of this? Well, so

9:31

the existential threat problem mostly comes from

9:34

these models getting out of hand and

9:36

into the hands of those that might

9:38

use them for nefarious purposes. We think

9:40

about things like fake news, mass generation

9:43

of content that may affect

9:45

elections. Obviously we're in the year of elections, as

9:47

we record. And

9:49

depending on when this comes out, we'll see how many of

9:52

those have been affected by AI. But I would guess a

9:54

lot. Whether or not there's a

9:56

real effect or at least a correlative

9:58

effect that we point out and

10:00

say, hey, there was fake news in the

10:02

ecosystem. It'll be hard to make an argument

10:04

with anybody that that's not really been the

10:06

case. Whether or not it's had any real causal

10:08

effects on the outcome of

10:10

elections, that's yet to be seen. But certainly it'll

10:13

be out there and it'll be something that people point

10:15

at. And that in itself is disruptive, right? When

10:18

something like this new technology comes into the space,

10:20

think back to Cambridge Analytica, think

10:22

back to what happened with Facebook. Long before anyone found

10:25

out there was something going

10:27

on under the hood there, people were still

10:29

pointing at Facebook and pointing at many other

10:31

social platforms and saying, this is going to

10:33

have a negative effect on our elections or

10:35

on the outcome of other social issues. And

10:38

I think we can already do that with AI

10:40

in an unchecked, unregulated way. Now, the

10:42

good thing is I do think global regulators

10:44

are learning from what's happened in the social

10:46

media space. And as you

10:48

say, in the EU and in the US and around

10:50

the world, people will move fast to

10:52

try and bring some kind of regulation in. But

10:55

the big difference here is that this isn't a

10:58

channel that we're talking about. AI isn't something

11:00

that has just become one person's job in

11:02

the company or one person's job in the

11:04

church. This affects everything. It's affecting all the

11:07

tools that we use. Now, even the things

11:09

we're using to record this podcast or the

11:11

computer that you're using today, certainly

11:13

the phone that you're going to use in the next 24 months is

11:16

going to have some kind of AI living inside of it. And

11:18

until we all have a better understanding of what that

11:21

means for us, we're not going to be turning it

11:23

off by default. We're most of the time going to

11:25

be opted into it. And so we

11:27

could accidentally, if we're not careful, sleepwalk into

11:29

another type of experience that we've had with

11:31

social media where we are falling foul of

11:33

giving our data over to platforms that may

11:35

want to use it for different purposes, even

11:38

if that's just to monetize it with us

11:40

not knowing about it, even if that's just

11:42

the worst thing that happens, that's still something

11:44

that could be happening to us that we

11:46

might not want to be participating in. And

11:48

I think that's why we've got to be

11:50

cautious. Strange

11:53

analogy here, but I

11:55

just interviewed William Ury, who will

11:57

be on the podcast. He wrote the famous Getting

11:59

to Yes with Roger Fisher, and

12:02

has been involved in really

12:05

the biggest negotiations in the last 50 years. And

12:07

although we didn't spend a lot of time on

12:09

it in the conversation, remember talking

12:11

to him off mic about the arms race, like

12:13

the START talks, the SALT talks. You're probably too

12:15

young to remember those in real time. I was

12:18

in college, I remember them. Read about them. And

12:20

there was the arms race, right? Like in the

12:22

80s, that was a very real thing. And

12:25

it looked like, you know, there was

12:27

the MAD pact, the mutually assured destruction, that

12:30

if the United States or the Soviet Union,

12:32

now Russia, pressed a nuclear button,

12:35

the planet was blown to smithereens

12:37

in minutes. And we came close

12:39

a few times and then that

12:41

decelerated. So do you see

12:43

right now, we can have another race

12:46

in technology to the top. It's profit

12:48

driven. It's not driven by states and

12:51

governments, it's driven by private

12:53

industry. Do you see

12:55

any kind of, and I know you work in

12:57

this area, hence the question, any kind of, not

13:02

external ethics council, but internal

13:04

where OpenAI sits down with

13:07

Anthropic, sits down with the

13:10

thousands of startups, with Meta and

13:12

says, hey, what are we doing

13:14

here that will benefit humanity?

13:16

Do you see any of that happening? Or

13:18

is that really left to outside actors? Well,

13:22

I think that it's definitely coming because the pressure

13:24

is there for them to all be on the

13:26

same page. The challenge obviously with the commercial side

13:29

of it is that none of them want to

13:31

share their toys, right? They all wanna keep their

13:33

own little special sauce. And that's understandable too. I

13:36

mean, the AI wars, they're basically becoming a

13:38

proxy for the cloud wars that we've seen

13:40

over the past decade, the Amazons, the

13:42

Googles, competing, but the difference is that

13:44

they've all been leapfrogged by some of these

13:47

startups that they couldn't have anticipated at the

13:49

time. And hence the investments that are coming

13:51

in left, right and center. What's really

13:53

interesting, if you look at the board makeup of

13:55

many of these companies, they've got people

13:57

from all these different companies sitting on

13:59

each other's boards, they're building up different teams.

14:01

You've got people like from Google that are

14:03

now on the Anthropic board, you've got Amazon

14:05

investing in them as well as Google investing

14:08

in them. And so

14:10

it does make the landscape quite muddled, but

14:12

they all are essentially moving towards the same

14:14

thing of for profit as the means of

14:16

growing these things. And there's nothing necessarily intrinsically

14:19

wrong with that. But

14:21

the accountability is the thing that we want to

14:23

see. And obviously, that's the work that's being done

14:25

by the EU with the AI Act that's just

14:28

been passed, and what's going through

14:30

Congress at the moment. I'm sure what will be coming

14:32

out in both the Canadian and the UK governments

14:34

as they try and bring something that's of parity

14:37

to those regulations

14:39

and legislation. But

14:41

the challenge is how will we as a

14:43

society respond to this? I think that's the

14:46

bigger problem because governments, they're

14:48

great at regulating big companies. It's much easier for them

14:50

to do because there's less people they have to talk

14:52

to. But I kind of look at

14:54

it as the analogy of when we wanted to

14:56

make cars safer, we had to do two things

14:58

at the same time. You had to go to

15:00

Ford and GM and Volvo and everyone and say,

15:02

hey, put seatbelts in the car. And

15:04

that was one part of the problem. And then the other part

15:07

of the problem was telling all of us to put the seatbelts

15:09

on when we got in the car. And

15:11

that's the bit that seems to me is

15:13

missing. We're not seeing an awful lot of

15:15

work by governments, regulators, but other public bodies

15:18

to make sure that Carey knows what he's doing

15:20

with his data and JP knows what he's doing

15:22

with his data and just put that on a

15:24

higher pedestal than we do right now. And

15:27

we see this all across the place. And

15:29

you mentioned Jonathan Haidt's work around

15:31

the anxious generation. We're just handing over

15:33

so much of our data in the

15:35

form of our smartphones. And particularly we're

15:37

handing over those smartphones to our kids

15:40

without training them on, hey, this is what's

15:42

going on with the data that you're giving

15:44

away when you sign that little tick box

15:46

without reading the terms and conditions. That's

15:49

where we need to be paying more

15:51

attention. So does anybody read the terms

15:53

and conditions? Like seriously, I know, I

15:55

know. And then it's accept

15:58

all cookies, decline all cookies, or

16:00

worse, manage your preferences. And I've hit the

16:02

manage your preferences. And I'm

16:05

led into this confusing landscape that

16:07

I'm just like, all right, I'll

16:09

just accept them. What's at stake

16:11

in moments like that where we're

16:13

accepting terms and conditions and

16:16

cookies. And I know that Google's gonna

16:18

change that massively this year

16:20

as well with the death of third party cookies. Well,

16:23

I mean, the history of the

16:25

internet is us giving up privacy

16:27

for utility, right? If we find

16:29

enough utility in anything, we'll give up some privacy

16:31

for it. Every time that you want an Uber

16:34

to arrive and find you on the street, you're

16:36

giving up a little bit of privacy for that

16:38

utility. The same if you want your Shopify account

16:40

to remember your login details, or if you want

16:42

to use Meta products to browse what your college

16:44

roommates are up to. Every time

16:46

we do that, we're giving up some level

16:49

of privacy for that social utility back. And

16:51

the history of, as I say, the internet

16:53

says that we will continue to do that

16:55

because as humans, just by our very nature,

16:58

we're quite lazy animals. We like to find straight

17:00

paths between point A and point B. And

17:03

as long as we're those lazy animals,

17:05

we'll continue to find ways, even self-inflicted,

17:08

of exploiting that laziness to

17:10

get what we want. And that's where

17:12

the biggest problem is, is that most often

17:14

these products are not particularly clear about what

17:16

they're doing with the data. And

17:18

even if they are clear with what they're doing with

17:20

the data, they're moving at such a speed that we

17:22

can't really know five, 10 years from now what

17:26

this is gonna look like. The AI that we

17:28

have today, as our mutual friend,

17:30

Kenny Jahng says, is the dumbest it will

17:32

ever be. Yes. Right? Yeah.

17:35

And that's the thing that we can't anticipate. If you even think

17:37

about the advancements we've seen in the past 48 months from

17:41

these quite simple language models, things like

17:43

Alexa, things like Google Assistant, then we

17:45

get the first version of ChatGPT

17:47

that can just about write something interesting.

17:50

But then these leaps and bounds to the video

17:52

and the imagery that they're able to create. If

17:55

that curve continues to grow exponentially, five

17:57

years from now, 10 years from now.

18:00

we can't even imagine what these

18:02

things are going to be able to do. And

18:04

they're all learning from the data that we're giving

18:07

them right now. And they'll

18:09

learn from the data that we create using

18:11

these tools in the years to come. And

18:14

so that's why we need to take more

18:16

accountability for what we're using them for. Well,

18:18

this is relevant on an individual user basis,

18:20

but also for all the leaders who run

18:23

organizations whose organizations are collecting this kind of

18:25

information too, right? So two

18:28

questions for you. Number one, what do you do?

18:30

Do you accept all cookies? Do you decline some?

18:32

Do you manage your preferences? What do

18:35

you do personally with that? Knowing what you know,

18:37

which is probably more than

18:39

most of us listening to this podcast and

18:41

certainly more than the person conducting this interview.

18:45

I think you're more educated than you let on.

18:47

Oh, I do. I would say. A little knowledge

18:49

is dangerous, JP. It's very dangerous. I

18:52

try. I think I try, but I'm not going

18:54

to say that I'm not guilty of this myself

18:57

because I too want all of those things. But

18:59

I would be particularly when it comes to these AI

19:01

tools that are now being built on top of the

19:03

language models, let's move like the social networks and other

19:06

things like the iTunes privacy agreement

19:08

to one side and think

19:10

about particularly these AI tools. One

19:12

of the things I'm always doing is looking at what

19:14

models are they built on top

19:16

of. You've obviously got things like

19:18

ChatGPT, which we all know. Many people

19:20

might have tried out something like Claude

19:23

or Perplexity. And then there's the new

19:25

AIs that tend to feel more friendly

19:27

like Pi from Inflection. Some

19:29

of these tools you guys may find or you'll find in the

19:31

show notes, I'm sure. But these new

19:34

types of tools, they're not

19:36

always the experience that you're having

19:38

with them at least is not always built

19:40

by that same company. We're seeing lots of

19:42

startups, lots of people building new experiences that

19:45

you can use on your phone or on

19:47

the desktop that are wrappers on top of

19:49

other language models. It's like a skin, right? Exactly.

19:52

It's a skin with some prompt engineering

19:54

and... Specialized utility.

19:57

If you don't know what model is underneath that...

20:00

you don't know where your data is ultimately going to

20:02

and what it's being used for in the future.
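
[Editor's sketch: to make "a skin with some prompt engineering" concrete, here is a minimal hypothetical illustration in Python. call_model, the model name, and the product are invented stand-ins, not any real vendor's API. The point is that the whole app can be a brand plus a canned system prompt in front of somebody else's model, so the user's text goes to whichever provider sits underneath.]

# A "wrapper" app: branding plus prompt engineering over someone else's model.
def call_model(model: str, system_prompt: str, user_text: str) -> str:
    # Hypothetical stand-in for a provider SDK. In a real wrapper, this is
    # where the user's text leaves the app for the third-party model.
    raise NotImplementedError("stand-in for a real provider call")

# Invented product name and prompt, purely for illustration.
SYSTEM_PROMPT = (
    "You are SermonBuddy, a warm assistant for church admins. "
    "Always answer in an encouraging tone."
)

def sermon_buddy(user_text: str) -> str:
    # The entire "product" is this one call; which provider actually
    # receives user_text is invisible to the end user.
    return call_model("some-hosted-llm", SYSTEM_PROMPT, user_text)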

20:04

Now, personally, I don't think there's anything particularly

20:07

nefarious going on at any of these

20:09

companies right now. There's no

20:11

evidence to suggest that from what I'm seeing.

20:13

They all ultimately, as you've alluded to before,

20:15

are commercial enterprises and they want to make

20:18

money. The minute that they lose

20:20

our trust is the minute they stop making

20:22

money. There

20:24

is that kind of interesting tension that the

20:26

commercial aspect of all this holds. It

20:29

doesn't mean that the companies building on top of these things don't

20:32

want to do things that, like

20:35

I say, are not bad in themselves. But they may

20:37

not be things that we necessarily want to

20:39

support as a society and particularly within

20:41

leadership or within church contexts. These

20:44

are the issues that I think we want to pay more

20:46

attention to because just because something can be done

20:49

doesn't necessarily mean that it should be done or

20:51

certainly that we would want it to be done.

20:54

So when you think about the future, I like

20:57

to think in terms of trajectory

20:59

or trendlines, JP. So

21:01

if you look at the decline of the church

21:03

over time, there's a trendline. And

21:05

even after COVID, I was reading a few graphs

21:07

recently. Basically there's an interruption, a pattern

21:09

interruption, and then it continues on the

21:12

historic trendline down, which is really too

21:14

bad. Other companies like the

21:16

rise of tech, market capitalization or user

21:18

share. The trendline is here's 2010, here's 2015,

21:20

here's 2020, here's 2024. So

21:26

with no current, no future intervention

21:28

beyond what we see now, I'd

21:32

like to go into the worst case for

21:34

AI and the best case for AI. And

21:37

I know, ask 10 people, get 10 opinions. I get

21:39

it. But I would love your take as

21:41

somebody who's literally built and sold companies in

21:43

AI and who sits

21:45

on a board and has your

21:47

resume. What is the worst case

21:50

that we could imagine a world

21:52

five, 10 years down the road in terms of

21:54

AI and where it's going? Well,

21:57

if anyone has watched the recent Mission Impossible

22:00

movies, they'll have seen what looks

22:02

like potentially the worst-case scenario, which

22:04

is some kind

22:06

of large-scale AI emerges as

22:08

being... I would

22:11

not say sentient because I'm not sure

22:13

that that's actually possible within our theology,

22:16

but something that emerges with a super

22:18

intelligence that is beyond our control and

22:21

starts deciding that either it's going to

22:23

align itself with those that

22:26

would seek to see the downfall

22:28

of society or the end of the Western world or however

22:30

you want to put it, or that

22:32

it just makes that decision in and of itself

22:34

and says, you know what, humans, you're not doing

22:36

a great job looking after this planet. Maybe we

22:38

would be better off without you. And

22:40

that argument has been made by many. The

22:44

likelihood of that happening, I think, is far

22:46

off because I think there are many steps

22:48

between here and there before we get anywhere

22:50

near that being a challenge. But it's not

22:53

entirely impossible. And

22:55

because it's not entirely impossible, we should at least give it

22:57

some level of attention, if

23:00

nothing else, to make sure it shapes the way that

23:02

we follow policy and that we keep away from those

23:04

things in the same way that we did with nuclear

23:06

weapons, which is often the comparison.

23:10

And probably more importantly, the way that we didn't

23:12

really do it with social media, which is probably

23:14

a more accurate comparison in terms of the actual

23:16

net impact that it's had on people around the

23:18

world. So I think in terms of worst case

23:20

scenarios, that's kind of out there in the existential

23:22

threat category. But what I think is more the

23:25

likely worst case scenario is

23:27

that these models continue to grow

23:29

unchecked and that we just begin

23:31

to adopt them into society in a way that we

23:33

just wouldn't want to see happen. I think the one

23:36

area that I'm particularly passionate about is in

23:38

relationships. We've already seen the

23:40

impact that COVID and the smartphone has

23:42

had on young people's ability to form

23:45

lasting social bonds with one another, to have

23:48

deep conversations and deep relationships. And

23:51

who knows already what knock-on impact that's going to

23:53

have 10 years down the road to the population, let

23:56

alone anything else. If young men can't find

23:58

young women to build population

24:00

with, without some other artificial intervention, we're going

24:02

to have problems, guys. That's the way it's

24:04

going. Add to that

24:06

the ability for you to have a virtual

24:09

girlfriend or a virtual boyfriend and fall in

24:11

love with the chatbot and be totally happy

24:13

with that. I think it's a

24:15

relational category that we don't want to see

24:17

emerge. And that's actually probably the more likely

24:19

worst case scenario that we are

24:21

already seeing companies building

24:24

tools, building experiences where you can

24:27

essentially take a step back from society

24:29

and have pretty much everything done for

24:31

you, both your work product,

24:33

your relational product, your call it whatever

24:37

aspect of your life you want

24:39

managed. And we end up

24:41

in a kind of, you remember that Disney movie,

24:43

WALL-E, right? We kind of end up in that

24:45

kind of all sat slightly in a half vegetative

24:47

state looking like a potato leaning back, watching a

24:50

recycling truck clean up the world. That's not a

24:53

great outcome, generally speaking.

24:56

And that's before they all had like AI friends,

24:58

they at least were talking to one another. So

25:00

I think that's where we could

25:03

end up in some form. And that

25:05

is to me a more likely worst

25:07

case scenario than something that

25:10

releases all the nukes or turns off all of

25:12

the energy grid. Yeah. And so sort

25:14

of a general social

25:16

degradation compared to what we have now.

25:19

And for regular listeners, Scott Galloway

25:21

is on this year again. I

25:24

don't know exactly when compared to when all the

25:26

episodes will be released, but he sees that

25:28

as a massive threat

25:30

to our culture: disengaged

25:33

young men. He says that's

25:35

where things go bad.

25:37

All right, now be the optimist. I'm

25:39

sure you've got an optimistic framework. Again,

25:41

all things being equal, nothing in the

25:44

future radically changing from the current trajectory.

25:46

What is the possible best case scenario?

25:49

Well, I think this is interesting. You recently

25:51

have Morgan Housel on your show, I think talking

25:53

about kind of the things that don't change.

25:55

Yes. Yes, I'm going to have him. Yeah.

25:58

Or you're going to have him on and yeah. And

26:00

you know,

26:02

his recent book, I think, is really fascinating because

26:05

he talks about the things that don't change

26:07

over time. And I think that's

26:09

actually more likely. There's a bunch of human characteristics

26:11

that don't change. As much as we're worried about

26:13

things like AI girlfriends and stuff, I actually think

26:15

that the overwhelming majority of us still find

26:18

the pull to be with one another in

26:20

the physical space. The real

26:22

optimistic view of AI is that it can

26:24

just take away so many of the drudgery

26:27

tasks that we all have to deal with.

26:29

Yeah, particularly in the ministry

26:31

context or work context. You

26:34

know, we spend so much time administering

26:36

the work we do rather than doing

26:38

ministry. Or we spend so much time

26:40

on the busy work, as it's often

26:42

called, rather than actually being busy working

26:45

on the thing that we want to focus on. And

26:47

AI does hold the promise to be able

26:50

to eliminate vast amounts of that work, days,

26:52

if not weeks, a year of that, you

26:54

know, filling in a spreadsheet or entering

26:56

a form for the 17th time that was built

26:59

on Windows 97 or whatever it might be. Like

27:01

these things are out there still.

27:04

And as much as the technical world has

27:06

been revolutionized, you walk into any public municipal

27:08

building or hospital or school and you'll find

27:10

technologies that have been leapfrogged time and time

27:13

again still running in, you know, many of

27:15

these systems. And AI

27:18

potentially could come along and if not replace those

27:20

technologies, just fill those things in for us, which

27:22

would make things a lot easier for most of

27:25

our lives. So administration, I

27:27

think, is going to be one of the big things that

27:29

maybe we do end up in the world of being able

27:31

to do the four day work week and, you know, living

27:33

that kind of dream that Tim Ferriss has been selling to

27:35

us for a number of years. That we

27:37

actually might have some lifestyle, you know,

27:40

benefit over time. I think that's one of the big things. But

27:43

it's also imperative, right? Like if we want to

27:45

solve some of the biggest issues in the

27:47

world, climate change, cancer, other

27:49

kind of diseases, and

27:52

also make sure that our politics is actually something

27:54

that can sustain itself over time. We

27:56

haven't done a great job of that as of late as the

27:58

human race since the industrial revolution. And

28:01

those problems are now so big that the

28:03

speed required to fix them doesn't seem to

28:06

be possible in and of ourselves. And

28:08

so my optimistic view would be that we

28:10

will see massive breakthroughs whether it's the work

28:12

that's being done on things like protein folding,

28:15

or the discovery of new drugs, the

28:17

identification of new cancers, the solving of

28:19

the climate race, or emergence of

28:21

things like small nuclear

28:24

fusion and nuclear fission reactors, which will give

28:26

us boundless energy sources. We're not going to

28:28

get to those things without artificial intelligence.

28:30

We don't have the collective brainpower on

28:32

our own. And so I

28:34

think that we could begin to see the

28:36

new renaissance begin to emerge over the next

28:39

decade as these technologies get

28:41

smarter, so long as the negative

28:44

opportunities don't outweigh the positive ones,

28:47

and that people continue to seek

28:49

for human flourishing rather than the

28:52

rather commercial, sad

28:55

story that is being told at the

28:57

moment, just flooding the internet with garbage

28:59

content. That's what seems more likely right

29:01

now. But I'm hopeful that

29:03

we might see a new generation

29:06

of technologists arrive that say, wow, we can

29:08

use this stuff to do immense good. We'll

29:11

probably toggle between the macro and the micro,

29:13

but to dip down into the micro for

29:15

a moment, what are some of

29:17

the most interesting uses

29:20

of AI that you're seeing right now?

29:22

And I'd love you to take that

29:24

in two phases. Number one, just what's

29:26

on your phone, what's on your laptop

29:29

that you're finding absolutely fascinating on

29:31

a micro level, and then other

29:34

technologies that perhaps you haven't discovered

29:36

personally yet, but you know are out there.

29:38

What's got your attention? Well, I mean, the

29:41

app that probably gets more attention on my

29:43

phone, if I think about it here than

29:45

any other, is Perplexity

29:48

at the moment. As a search engine, it

29:50

has almost entirely replaced my use of Google.

29:52

This is not an endorsement for them. I

29:55

don't have a particular affiliation. Nothing

29:58

of that nature, but I would just say, from

30:00

straight out usage, it's amazing

30:02

how quickly that has replaced things.

30:06

A story from this week, a client

30:08

rang us out of the blue at Vixen Labs, the

30:10

agency that I run, saying,

30:12

hey, I got in touch with you because I found

30:14

an article about the very specific thing I was looking

30:16

to do with AI this week. And you were the

30:18

top answer ranked on Perplexity because it

30:20

had found very specifically the article he

30:23

was looking for in a way that

30:25

Google had no ranking for us whatsoever.

30:28

And even if you take that in

30:30

microcosm, it's a really good example of

30:32

how these new tools are helping find

30:34

much more contextual information. So it's amazing.

30:38

If nothing else, because I can ask it

30:40

questions and ask it follow-up questions and it

30:42

seems to do a better job of looking

30:44

at stuff than I can do on my

30:46

own. Just Google search has so degraded over

30:48

the last couple of years. I know they

30:50

keep making changes. It's not functionally useless,

30:52

but it's approaching that. And

30:55

once these Gemini models begin to roll out

30:57

into mainstream Google search, which we can only

30:59

anticipate they will, then I think there's a

31:01

real chance Google catches up. But for now,

31:03

for me, I spend an awful lot

31:05

of time in Perplexity. For anyone that

31:07

makes slideware or content for the

31:10

internet, which I'm sure is all of us

31:12

that get stuck in PowerPoint from time to

31:14

time, I've been using tools like beautiful.ai, which

31:16

has an amazing ability for, you know, if

31:19

you're a pastor or an executive leader of

31:21

some description right now having to put together

31:23

this year's annual report or quarterly earnings, being

31:26

able to design a slide deck which

31:28

has the Claude model from Anthropic running

31:30

inside of it. You can

31:32

create an entire 20 slide presentation with one

31:35

line text prompt. It does

31:37

the layouts for you. It goes to DALL-E and

31:39

creates images for you. It chooses the most appropriate

31:41

slide layout. Like, it's an amazing tool. Again, not

31:43

aligned with them, but just that's the one that

31:46

I spend literally hours in

31:48

every single day. But Perplexity and beautiful.ai.

31:51

Yeah, for sure. Those are two. I'll be checking those out. Those

31:53

are new to me. I've heard of them, but I haven't done

31:55

anything with them. Yeah, they're definitely great

31:57

places to start. And if you are already using.

32:00

something like ChatGPT, you can access the

32:02

GPT-4 models, you can access the Claude models

32:04

inside of Perplexity as well. So you can

32:06

switch around and pay one subscription price. Is

32:09

it just me or is Claude significantly

32:11

more intelligent than GPT-4 right now?

32:15

It's certainly more eloquent, not necessarily more

32:17

intelligent, which is really interesting. What

32:20

we're beginning to see with some of

32:22

these models, they're bringing personality in some

32:25

ways. The Claude model is particularly good at

32:27

long form written content. That's probably why

32:29

I like it. Yeah, well

32:31

for you exactly and for myself as well. It's

32:34

much better at emulating my voice when I

32:36

give it examples than ChatGPT is. And

32:39

it sticks to the task for longer

32:41

than kind of wandering off and doing

32:43

something strange, which GPT can kind of

32:45

be prone to do. So yeah, I

32:47

would definitely say Claude is also

32:49

a really good place to play around with, particularly

32:51

if you're a content creator, if you're a writer,

32:53

if you're someone that writes press releases or social

32:55

media content or biographies or whatever

32:58

it might be, anything that's longer form,

33:00

Claude is probably your go-to model right

33:02

now for doing that well. And what's

33:04

really promising is those models are getting

33:06

small enough to be able to run

33:08

on a phone offline, which opens up

33:11

some really interesting opportunities in places where

33:13

connectivity is bad or

33:15

parts of the world where AI tools

33:17

would be pretty hard to justify

33:19

just because of the amount of data that they use. So

33:22

I'm particularly excited about where they might go.

33:25

Yeah, that's interesting. So again, a one-on-one

33:27

question, and by the way, for Canadian

33:29

listeners, most are American. You

33:31

can't get Claude, at least at the time of recording

33:33

this, but a VPN will work you around that pretty

33:35

easily. Yeah, well, yeah. And you can

33:37

also access it via some of these other tools as well,

33:39

which is again, the thing we were

33:41

saying before about some of these tools, they have

33:44

other models inside of them. So yeah, your mileage

33:46

might vary, but play around. And a VPN is

33:48

a great idea anyway on public internet, but beyond

33:50

that, that's a virtual private network. Claude, by the

33:53

way, is C-L-A-U-D-E.AI. And

33:57

what it, okay, so help me

33:59

understand, as somebody who does not code

34:01

and someone who is not into AI deeply. ChatGPT

34:04

is an LLM, Claude is an LLM

34:06

as well, large language model. Is that

34:08

right? But they're trained on

34:10

different things. Like I don't understand, because

34:12

they are very different. Like

34:14

user interface, Claude is like, oh, that

34:16

was really helpful. ChatGPT is one more

34:19

time please with feeling. You know? Yeah.

34:21

So it's helpful to understand the difference

34:23

between a large language model and the

34:25

app that you might use that thing

34:27

in. Yeah. Claude

34:29

has, there are a number of different Claude

34:31

models. It doesn't help that they're called Claude

34:33

as well, but there are a number of

34:35

different Claude models. If you go to claude.ai,

34:37

you can interact with the Claude model or

34:40

at the time of recording, it may be

34:42

a different one from when you're listening to

34:44

this, but the Claude 3 model or Claude

34:46

4, Claude 5, whatever comes down the line.

34:48

So the language model

34:50

itself is trained on a massive

34:52

amount of human created data, some

34:54

synthetic created data living on the

34:56

internet. And depending on who you're

34:59

getting that model from, they've sourced it from

35:01

different places. Some have more commercial agreements in

35:03

place, others are just scraping stuff that's in

35:05

the public domain. But the big

35:08

difference is the way that they're trained and tuned

35:10

to behave in different ways. And then when you

35:12

come and use them inside of a tool, for

35:14

example, inside of ChatGPT, you're using

35:16

what is probably the GPT-3.5 or GPT-4 model. Those

35:21

different models have different amounts of data in

35:23

them, different knowledge, and they behave in slightly

35:25

different ways, but you can access both of

35:27

them inside of the ChatGPT app. And

35:30

it's the same models that if you

35:32

use Bing Chat, for example, or you're using

35:34

Microsoft Copilot, you're using the GPT-4

35:37

model inside of those tools.

35:40

And they have different abilities too. So Bing Search is-

35:42

That's sort of the stream idea, right? Yeah,

35:44

but so Bing Search uses the same model that

35:46

ChatGPT has, but it has the ability to go

35:48

to the internet and pull in real time results

35:51

and blend it with the content that the

35:53

model can produce. The same thing that Perplexity

35:56

is able to do when they're using their

35:58

own Perplexity search model, and Claude isn't

36:00

able to do in and of itself because

36:02

it doesn't have live internet access.
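
[Editor's sketch: the pattern described here, where Bing Chat or Perplexity pulls in real-time results and blends them with what the model produces, is commonly called retrieval-augmented generation. The Python below is a hypothetical illustration; web_search and call_model are invented stand-ins, not real APIs.]

# Blend live search results with a language model's output.
def web_search(query: str, k: int = 3) -> list[str]:
    raise NotImplementedError("stand-in for a live search index")

def call_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a language model call")

def answer_with_live_context(question: str) -> str:
    snippets = web_search(question)  # step 1: fetch real-time results
    sources = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Answer using the sources below and cite them.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return call_model(prompt)  # step 2: the model blends them into an answer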

36:05

So they all have slightly different abilities. It's

36:07

kind of like having a range of different assistants that

36:10

you might employ to different jobs. Some

36:12

of them are going to be better at different things. And

36:14

so ChatGPT is particularly good at tasks

36:16

and handling data. It's particularly good at writing

36:18

code. Claude is much better

36:21

at linguistic tasks, long form textual

36:23

tasks. And neither of

36:25

those produce images, but that's why you have

36:27

things like DALL-E from OpenAI or Mistral.

36:30

And there are many others that can produce

36:32

visuals as well. Okay. So just

36:36

from a basic thing, Anthropic, which

36:38

is the company behind Claude, are

36:42

they working totally independently from

36:45

ChatGPT? There's no master data source.

36:47

These are two separate companies that have

36:49

developed them from scratch. As far as

36:51

we're aware. As far as

36:53

we're aware. Let's not get into conspiracy theories.

36:55

Well, no, but it is an important thing

36:58

to highlight that none of these companies publish

37:00

exactly what data they've been trained on to make

37:02

them work. Not in full

37:04

anyway, even the open source ones. Now you

37:06

can begin to get an understanding of what

37:09

might be in there because we've seen examples.

37:11

And this is the basis of the court

37:13

case being held by the

37:15

New York Times against OpenAI at the

37:17

moment is that clearly ChatGPT is

37:19

able to produce, well, at least was

37:22

able to produce long form examples of

37:24

what seemed to be exactly copies of

37:26

New York Times articles. Therefore, you can

37:28

assume it was in the training data.

37:31

It may no longer be. And

37:33

so each of these different models, they have slightly

37:36

different training data that goes inside of them. And

37:38

that's what makes them better or worse at certain

37:40

tasks. There are some models that are trained, for

37:42

example, just on scientific papers. And so scientific papers,

37:44

if you're going to write them, they have a

37:46

very specific linguistic style, a way of referencing, et

37:48

cetera. And so you would want

37:50

a model that was trained on that. And this is where we see

37:52

a lot of companies going right now is

37:55

building on top of these language models with

37:57

their own data sets. So you've seen examples

37:59

from Bloomberg doing it with financial data,

38:01

or McKinsey, the consulting firm, doing it with

38:03

all of their consulting content. And we're now

38:06

seeing in the Christian world, Bibles being built

38:08

on top of these models. We've just seen the release this

38:10

past couple of weeks of Bible.ai, which is

38:12

a project that's trying to build a large

38:14

corpus of Christian content to

38:16

train a foundational language model so it's

38:19

better at producing biblically related content. So

38:21

we're going to see examples of that

38:23

type of thing emerge as well.
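
[Editor's sketch: one common way to "build on top of a language model with your own data set," as in the Bloomberg and McKinsey examples, is to retrieve passages from a private corpus and hand them to a general model as context; full fine-tuning is the heavier alternative. Everything below is a hypothetical Python illustration: the documents are invented, naive keyword overlap stands in for real embedding search, and call_model is a stand-in for a base-model API.]

# Ground a general model in a private, domain-specific corpus.
CORPUS = [
    # Invented documents, purely for illustration.
    "Q3 giving rose 4% year over year across the congregation.",
    "Volunteer onboarding requires completed safeguarding training.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    words = set(question.lower().split())
    # Naive keyword overlap; real systems rank with embeddings.
    ranked = sorted(CORPUS, key=lambda t: -len(words & set(t.lower().split())))
    return ranked[:k]

def call_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a base language model")

def domain_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    return call_model(f"Using only this material:\n{context}\n\nQ: {question}")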

38:25

It'd be fun to see them merge so

38:27

many different theologies into one. That'll be

38:29

absolutely fascinating to watch. And that's

38:31

the problem, right? Because first of all, you want to

38:33

put something in them, you've got to agree with what

38:35

we agree on. Which is

38:38

not always the easiest of things. Okay, I'm going

38:40

to have to track that one. Now that's a

38:42

good survey of the landscape, it really is, JP.

38:45

You specialize in conversational AI.

38:47

What is that? So

38:50

yeah, so we started Vixen Labs, which is the agency

38:52

I've run for the past five or six years, really

38:55

with intent around making experiences

38:58

of talking with the companies and organizations that

39:00

you have to do business with every day

39:02

a little bit better through

39:04

talking to them rather than having

39:06

to click and scroll and swipe

39:08

down. Is this like instead of

39:10

press five? Precisely. And

39:12

so we've been really focused on things

39:15

like voice technologies, we spend a lot

39:17

of time building applications for Alexa, for

39:19

Google Assistant over the past couple of years. And

39:21

now that's emerged into these large language models

39:24

that can help us build things like chatbots

39:26

and voice assistants that are just much more

39:28

pleasurable to use and actually helpful than

39:31

they have been historically, more than just

39:33

doing timers and reminders, which

39:35

is what they've been good for for a little while. So

39:37

yeah, we focus on that because I think

39:39

we want the future of using the internet

39:41

to be one that's just a little bit

39:43

more personal and more human,

39:46

ironically. Which

39:48

we are conversational animals, right? That's how

39:50

we get things done. You don't

39:52

roll up to a McDonald's drive-thru in order

39:56

to have a chat with someone. But

39:58

the means that you do so... the

40:00

means you get that burger delivered into the car,

40:02

or whatever your order is, is through

40:05

the art of conversation. And that's the way

40:07

we've always done things as humans. We believe,

40:10

certainly from a Christian perspective, we

40:12

began with the Word, right? And so, we

40:14

know that conversation and the art

40:16

of talking to one another is an important thing. So

40:19

I would love to see more of our technology

40:21

head in that direction, rather than it being this

40:23

kind of screens in front of us at all

40:25

times head down in a phone. We

40:28

think that by bringing the ability to talk to

40:30

and type to if necessary, tech,

40:33

it lifts us out of that trap and

40:36

lifts us out of that doom scrolling mentality. Oh,

40:39

that's a great thing. So for those who've

40:41

called Apple in the last six, seven years,

40:44

you'd be familiar with that kind of conversational

40:46

AI where it's like, I'm a fully trained

40:48

voice specialist that can help you. What's wrong?

40:50

And then you tell it, my computer won't

40:52

boot up. And it's like, oh, it sounds

40:54

like you need technical support. Hang on. I'll

40:56

put you right through that kind of thing.

40:59

Well, so that's, I think, what we would see as

41:01

conversational tech 1.0. It's from natural

41:05

language processing and being able to

41:07

understand, natural language understanding. So

41:10

that is the "press one for this." And it

41:12

knows at least what you said. But the difference

41:14

now we see is with these language models is

41:16

that you could say the entire thing of what's

41:18

wrong with your MacBook. And

41:21

if Apple chose or other manufacturers chose to

41:23

put these systems in place, it

41:25

could do what ChatGPT could do, which is

41:27

take all of that information from you directly and

41:29

actually give you the answer. It would probably work

41:32

out pretty well because it's the same

41:34

technology. It's the same technology. It's just being spoken to

41:36

you rather than showing you as a line of text

41:38

on screen.
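
[Editor's sketch: the contrast between conversational tech 1.0 and the language-model era, as a hypothetical Python illustration. The intent keywords and call_model are invented stand-ins: the old pattern maps an utterance onto a fixed menu of intents and dead-ends on anything unanticipated, while the new pattern hands the whole free-form description to a model.]

# Conversational tech 1.0: match the utterance against a fixed intent menu.
INTENTS = {
    "billing": ["bill", "invoice", "charge"],
    "tech_support": ["boot", "crash", "screen"],
}

def classify_intent(utterance: str) -> str:
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(word in text for word in keywords):
            return intent  # route to a canned flow ("press one for...")
    return "fallback"  # anything off-menu dead-ends here

# Language-model era: no fixed menu; pass the full description through.
def call_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a language model")

def support_answer(utterance: str) -> str:
    return call_model(f"Help this customer directly: {utterance}")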

41:40

And we're seeing this now: if you use the ChatGPT app on your phone, you can talk

41:42

to it and it talks back to you in a

41:44

variety of different voices. And you can always flick away

41:47

and look at the screen. But actually

41:49

that experience of talking to these things with

41:51

your voice is actually quite delightful and

41:54

certainly feels far more human, even though

41:56

it's not, than typing

41:58

into a screen or into a text box. Yeah.

42:01

All right. I want to talk about because

42:03

you work with Ecclesia,

42:07

Ecclesii. How do you pronounce it? Ecclesii.

42:10

Ecclesii. I think the pronunciation we got

42:12

in was about, you know, seeing it written down is different. It's

42:15

a think tank that is starting to

42:17

process or that is processing the theological

42:19

issues associated with AI. What

42:22

is, if you were to look at

42:24

an agenda of what consumes your time

42:26

and Ecclesia's time,

42:28

what are the top two or

42:31

three things on your agenda right now? What

42:33

are you working on? So

42:35

with Ecclesia, we founded the think tank at the end

42:37

of 2023 as we began to see the

42:41

emergence of the church really responding

42:43

to the AI revolution and particularly

42:45

for church leaders in all contexts,

42:48

whether that's in ministry, in

42:51

parachurch organizations, in non-profits, but Christian

42:53

leaders looking for guidance on how

42:55

do I wrestle with some of

42:57

these big topics that are arising.

42:59

And broadly speaking, we've rallied around three big

43:02

areas. One is to have a better theology

43:04

of AI because it does raise some existential

43:06

questions. Maybe we can get into those. I know you've done a

43:09

lot on that in the past. But

43:11

then the practical and the pastoral implications are

43:13

the other two real areas. So

43:16

practically, what does it mean to have an AI

43:18

policy for your church or for your charity? What

43:20

does it mean to have a code of ethics? How

43:23

will you physically go around choosing which tools to

43:25

use and which ones sit outside of things?

43:28

And then pastorally, using that as a kind

43:30

of broad catchall, is how you

43:32

deal with the issues that arise from AI

43:35

in ministry or in society. And I always use

43:37

this example, which I know is maybe a bit

43:39

trite, but when a young person comes to a

43:41

youth worker and says, hey, I've fallen

43:44

in love with this chatbot. What do

43:46

I do? How does a

43:48

youth worker or a minister in that context

43:50

respond to that question? What do

43:52

they actually say? Do they acknowledge the feelings which

43:54

may be very real for that young person who

43:56

feels like they truly have a relationship with these

43:59

things? In recent

44:01

months we've sadly seen people take their own lives when

44:03

these bots break up with them, or

44:05

people who have tried to marry them.

44:07

We've seen both the good

44:09

and the bad of these things, and that's already now, let

44:12

alone 24 months from now when this stuff gets really smart.

44:15

So how do we contend with these issues? This

44:17

is the stuff we're trying to wrestle with because

44:20

even if you choose not to, quote

44:22

unquote, use AI in your ministry, your

44:25

congregations, those that you're trying to seek to

44:27

reach and be impacted and certainly on the

44:29

mission field, they will be using it. There's

44:31

no way that we're putting this back in

44:33

the box. And so what we really want

44:36

to see at Ecclesii is the church be

44:38

prepared and equipped to respond to that situation

44:40

and to be able to roll with these things

44:43

as they emerge, not be caught on the back

44:45

foot as we've so often seen it be the

44:47

way in the past when it comes to technology

44:49

and the church where we're often bad at it

44:51

or often late or often both. And

44:54

I think we could do a better job of it this time around.

44:57

I really appreciate it. And that example of

44:59

somebody falling in love with an AI or

45:01

a chatbot has been used before on this

45:03

podcast and it sounds

45:05

like science fiction, but it's not

45:07

science fiction. Yeah. So I

45:10

mean, one of the things that we see

45:12

and I saw this at a hackathon that

45:14

I was at recently where there was an

45:16

example of a church organization that was trying

45:18

to create a technology using CCTV and

45:20

AI and it could do face tracking to monitor

45:22

attendance. Now there's probably a bunch of good reasons

45:24

why you might want to do that, but

45:27

there's also a lot of challenges with it as well,

45:29

right? Totally doable with

45:31

existing technology like plug together a few

45:33

different tools. Totally doable right now. Should

45:36

it be done? I think it's a really

45:38

problematic use case because

45:41

the minute you start tracking who's there, who's

45:43

not, who's to stop someone calling them up

45:45

and saying, Hey, your giving looks like it's

45:47

dropped this past month and you've not been

45:49

at church three times or worse. Hey,

45:52

I saw you sit next to the same woman three

45:54

times in the past month in church and she's not

45:56

your wife and your wife wasn't there. Is there something

45:58

going on there? Like these are... Again,

46:01

it's not about the AI being

46:03

nefarious. It's AI in the hands

46:06

of someone that might use it in the wrong way that is

46:08

the problem. These are real-time

46:10

things and we need the church to be not

46:12

just going, oh, isn't this exciting? I

46:14

can do things that, well, I don't have to have someone

46:17

at the back with a clicker registering how many people showed

46:19

up this Sunday. I can just have an AI do it.

46:21

Well, where is that data going? What are you

46:23

going to do with that data? What are the

46:26

ways you're going to keep that person safe, let

46:28

alone the safeguarding implications and many other issues besides?

46:31

These types of things are already happening.

46:33

The technology is already there to use

46:36

it. I want us to use

46:38

this technology. I don't want to sound like I'm saying put the

46:40

brakes on, but you wouldn't get in

46:42

a car with a great accelerator with no brake pedal.

46:45

We need to have both. That's what we're advocating

46:47

for is we need to go

46:49

forward and try and find use cases for this stuff,

46:51

but also know when we need to apply the brakes

46:54

and make sure that we're driving on the right road. Well,

46:57

and I think that's a good point. I

46:59

mean, that product probably isn't at market right

47:01

now and hopefully not being developed, but

47:04

it probably will be and it probably will be

47:06

released by somebody down the road, whether it's out

47:08

of that source or a different source. And

47:10

then you need the wisdom to figure it out.

47:12

I was thinking when you were saying the unintended

47:15

and the intended consequences of AI are in your

47:17

church. Imagine a church in 2008, just

47:22

as Twitter was emerging, Facebook was starting to

47:24

escape out of campuses saying, we're not doing

47:26

social media. We're not doing, as we called

47:28

it then, Web 2.0. It's banned

47:30

in our church. We won't have an account if

47:33

that happened. And let's say you're still reaching people

47:35

in 2024, which is a whole other question. Everybody

47:39

in your church, you're dealing with

47:41

increased teen anxiety, depression, polarization,

47:44

extremism, all of those things have

47:46

happened to your church, even

47:49

though you're off social media. And

47:51

your argument is you can, and a surprising number,

47:53

I mean, Gloo has done the studies on that,

47:56

Gloo and Barna, a surprising number of people in

47:58

the church are opposed to AI, and

48:00

saying, we're not going to use it. Well, first of all,

48:02

you're probably already using it and don't realize it. Secondly, even

48:05

if you did that, that doesn't mean your church isn't

48:07

going to have a whole bunch of people, like meaningful

48:09

percentages in love with chatbots,

48:12

using technology for malevolent purposes, not knowing

48:14

ethically what to do as a Christian

48:16

about how to integrate it in their

48:18

businesses and in their home, what to

48:21

do with their teenagers, their marriage broke

48:23

up, um, over AI. Like

48:26

all that stuff is coming in here. It's all

48:28

coming. And it's all coming out. It's also coming,

48:30

you know, for the church leader that might be

48:32

listening, it's also coming into the actual delivery of

48:34

ministry as well. Right. Yeah. We, we get this

48:36

question all the time about, can I use chat

48:38

GPT to help me write a sermon? And

48:41

I know that we can, because you and

48:43

many others are working on building helpful tools to help

48:45

you do exactly that, because it can be done.

48:48

Um, but you know, should we be

48:51

asking it to write all of our prayers for a Sunday

48:54

and just reading them out, and, you know,

48:56

should we be asking it to compose

48:58

worship music and using that without the

49:00

spirit being part of that process? Like

49:02

these are really thorny theological issues that

49:04

we have to try and unpick. And

49:07

I don't think most churches

49:09

have time to think about this stuff, let

49:12

alone the knowledge or education to

49:14

do so. And nor should they, they're not

49:16

AI theologians sitting around thinking about this stuff,

49:18

we need to equip them with

49:20

this knowledge. Um, but yeah,

49:22

it's very much starting at the baseline of do

49:24

people know what the tools are? Do

49:26

they know how they're being used? And

49:28

crucially, are we thinking critically about them

49:30

before we actually pick up and log

49:32

in before you click yes to the

49:34

terms and conditions again, without reading them.

49:37

Um, just because the thing looks like it's

49:39

going to save you two hours of an

49:41

afternoon, like these are the steps we need

49:43

people to take and to be aware that,

49:46

um, when we add technology to

49:48

anything, we give it the opportunity

49:50

for it to take over some of our decision-making,

49:52

which we might want, but we also give it the

49:54

opportunity to take over some of our mistakes, which we

49:56

don't want. And that's the thing that

49:59

we want to be conscious. of is that if you

50:01

wouldn't be willing to give this task to an assistant

50:04

without checking what they were doing, don't give it to

50:06

an AI to do it either. We

50:08

need to stay in the loop and help use

50:11

these tools responsibly, particularly in the ministry context

50:13

if we want trust in the church to

50:15

increase, to use your analogy

50:17

on that trendline rather than decrease as

50:19

it so often has done. So

50:21

what should the average church leader

50:25

and the average business leader, because we have a lot

50:27

of business leaders listening to... Yeah. So

50:30

what questions should they be asking? What should they

50:32

be paying attention to? What should

50:34

they be doing right now? Well,

50:37

the first thing goes for either if you're in an enterprise

50:39

context or you're in a small business or

50:41

if you're a church leader or in a

50:44

charity is do you even know what AI

50:46

is being used in your organization? If you

50:48

can't answer that question confidently,

50:51

then that's the first place to start. And that might

50:53

mean, yes, you need to do a survey and find

50:55

out how many people have signed up to a chat

50:57

GPT Plus with their private credit card and they're charging

51:00

it back. Or how many

51:02

of you have turned on an AI

51:04

module inside of your conferencing software without

51:06

looking at how that data is being

51:08

collected or stored? Do you

51:10

know how the tools that you're using

51:13

are using AI and what models are they using? Because

51:16

we've seen, as we've said, AI has shown up

51:18

now not just in a, hey, I bought an

51:20

AI product. It's showing up in all of the

51:22

technology we use from the mailbox provider that you

51:24

choose to things like Canva and Notion and Asana

51:27

and all of the things that we use to

51:29

keep our day-to-day projects going

51:31

in the digital ecosystem and many of those being

51:33

used, obviously, in the church as well. These

51:36

all are beginning to have AI tools added to

51:38

them. It may not be the headline. It may

51:40

be just a part of the stack. And

51:42

so the first thing to do is pay attention to that.

51:45

What's going on? Do you know what's being

51:47

used? The second is actually to use some of it and

51:49

crucially to pay for some of it, which I know

51:51

might seem counterintuitive. But what I mean

51:53

by that is use some of these tools to get

51:55

familiar with them. As I say, I don't think open

51:57

AI, what they're trying to do, is evil. I'd encourage

52:00

you to go play around with Perplexity and chat GPT,

52:02

but do it in a responsible way where you're

52:04

keeping things private that should stay private. Don't put

52:06

anything on the internet that you don't want on

52:08

the internet. It applies to AI tools as

52:10

well. And comply with all of

52:12

the usual things like GDPR and COPPA

52:15

compliance if you're in the US or whatever else

52:18

regulations apply at the time of listening

52:20

to this. Take

52:22

those regular precautions, but go and play around. And why

52:24

I say pay for it is because when you begin

52:26

to pay for these tools, they

52:28

are not using your time and attention as

52:31

the currency. They're using your money as

52:33

the currency in the interaction

52:35

with these models. And that means that more

52:38

often than not, you're not having any of

52:40

your data go to train future versions of

52:42

the model. You've got the

52:44

ability usually to delete what you've put in

52:46

and remove it if you want to.

52:49

And you will be getting enterprise-level

52:51

safety and security at both ends when

52:53

you're sending data up and receiving it.

52:56

I did not know that. When you pay

52:58

for these things, you get those benefits. So

53:00

when you pay, you're under a stricter

53:03

level of privacy, security, etc. than when you're using

53:05

the free stuff. Did not know that. Just think

53:07

the difference between YouTube and YouTube Premium. What do

53:09

you get when you pay for YouTube Premium? You

53:12

don't get any ads. Because you're

53:14

no longer the product. I

53:16

happily pay for YouTube Premium in the past few years.

53:18

I do. That's one I happily pay for every month.

53:21

12.99 a month. Sign me up. But

53:24

it goes the same way. So when you pay

53:26

for Perplexity Plus or Pro, whatever they call it,

53:28

they're all called Pro Plus Max or Ultra, I

53:31

think is usually the way. But

53:33

whatever one you're paying for, the likelihood is

53:35

that the terms of service have changed and

53:37

you're no longer paying by training the model.

53:40

Your data is yours. So

53:42

again, you might not want to pay money to these people and

53:44

that's absolutely fine. But if you're going to use them, you're actually

53:46

better off paying for it most of the time. So

53:49

I would say first thing is know what's

53:51

going on. Second is try. Test it out.

53:54

And then the third thing is be transparent. Which

53:56

is if you're going to use these tools, whether

53:58

it's in the creation of... ministry resources, whether

54:00

it's in the administration of your business,

54:03

whether it's managing the data of your

54:05

people in whatever context, just

54:07

be upfront and transparent with the people that are

54:09

going to be affected by that. The best thing

54:11

you can do is publish an ethics policy or a

54:13

transparency policy around AI, put it up on your

54:16

website in the same way you would do with

54:18

your privacy policy or other things, and

54:20

just state how you're using tools, what you're

54:22

using them for, crucially what you're not using

54:24

them for, and be upfront with

54:26

your congregation or your business about it because the

54:28

last thing that anyone wants is for there to be

54:31

some kind of sneaking suspicion that, hey, I think

54:33

the vicar or the pastor or the minister is

54:35

writing all of his statements with chat GPT and

54:38

it turns out to be true. You know,

54:40

at least if he's upfront about it then maybe

54:42

that's okay, but you don't

54:44

want to be found out, and that's why

54:46

we want to make sure that people

54:49

take this seriously. I think there's a lot of

54:51

fear around transparency

54:54

and I wonder if it's tied. I mean

54:56

you're working with the Church of England on

54:58

ethics on this. I just

55:01

read a case, it might have been in the New

55:03

York Times, they have

55:05

an ethicist column and this pastor

55:07

served for 50 years, retired,

55:10

and someone in the church

55:12

just happenstance discovered another

55:14

sermon that sounded an awful lot like one

55:16

of his and then went back through his

55:18

work and discovered that they

55:21

were all ripped off, right down to the

55:23

stories that never happened to him. Like, he

55:25

told someone else's story as though he

55:27

had had that vacation or been to

55:29

that coffee shop, which to

55:32

me is unconscionable. The

55:34

ethicist was actually quite gentle saying, oh, let

55:37

this guy go into retirement. I'm like, no,

55:39

he lied. And yet

55:41

we've all used Google search for years,

55:43

right? I mean we use Google and

55:45

I mean... Thesauruses

55:48

and concordances and study guides and

55:50

all these other things as well.

55:52

Biblical commentaries, it's all digitized now.

55:55

I still use chat GPT

55:57

for first drafts and research. I've never found... a

55:59

good sermon to come out of AI yet.

56:02

That's just me. But sometimes if

56:04

I need something summarized or I want

56:06

ideas rebuilt or tested, I will

56:09

use it. So when it comes to

56:11

disclosure, like it's one thing to get up

56:13

there and say every Sunday, I used Google,

56:15

I read five commentaries, I went back to

56:17

the interlinear Greek, nobody does that. But I

56:19

have preached messages where I said, hey, a

56:21

lot of these ideas in the message, and

56:23

I'll give credit where they're due, came from

56:25

a sermon I heard from Tim Keller, or

56:28

Andy Stanley preached the series. I'm reteaching it right

56:30

now and it's got my own spin on it,

56:32

but you need to know these ideas came first

56:35

from Andy. I think that's a great example.

56:37

I think that's a great example. What

56:39

level of transparency would you give if

56:42

you weren't on the internet? That

56:45

should be the same level of transparency you

56:47

give with the internet, whether it's AI or

56:49

just Google. That's fine. That's what we're asking

56:52

for because it's

56:54

a good principle in general. I think it's a

56:56

Christian principle in general to be transparent about

56:59

these things, not necessarily to the

57:01

nth degree. Like you say, here's

57:03

my entire web search history that

57:05

went into this. That's not necessary. But

57:07

at least the spirit of it is saying, hey,

57:09

I used... Or maybe it's once

57:12

a year, or it's part of your statement that's

57:14

just up on the website somewhere or on social

57:16

media that says, just so you know, these are

57:18

the tools that I regularly use to help with

57:20

my research and planning of this thing. It

57:23

hasn't happened to me yet where anything has

57:25

come out of Claude, chat GPT or other

57:27

models where I'm like, oh, this

57:29

is like final draft 101. It's like, okay,

57:31

I have a few ideas. Let's get moving.

57:35

But I suppose the day is coming where

57:37

chat GPT will create a better sermon than

57:39

I could write or Claude would.

57:42

I wonder... You're a good preacher. There's

57:44

a lot of sermons that chat

57:46

GPT writes right now that are better than a

57:48

lot of sermons being written by humans.

57:50

Let's be honest. Perhaps.

57:52

And I guess you're at the point where

57:54

if I borrowed a John Ortberg sermon, I'm

57:56

just going to be like, hey, this started

57:59

with John Ortberg,

58:02

and, you know, I put my spin on it,

58:04

or maybe you stand up one day and go,

58:06

you know what, chat GPT spit out this amazing

58:08

treatise on grace. And today

58:10

I'm going to share and a lot of

58:12

it was done via chat GPT. If it's

58:15

like 90% that and 10% you, I'm trying

58:17

to like, for example, AI helped write me

58:19

these questions for you. And I almost always

58:21

for the last year and a half, run

58:25

questions and guests through chat

58:27

GPT or Claude. And

58:29

then I sit down and I go, Oh, I never would

58:31

have thought to ask that. So it's

58:34

always a mix. It's somewhere between, sometimes

58:36

half. And sometimes it's like,

58:39

I kept one question or whatever.

58:42

So it's just the idea of transparency. And I

58:44

think you're right. Having a policy is really good.

58:46

Do you have a downloadable one? I know Kenny

58:49

Jahng and church.tech and people like that are working

58:51

on that right now. I have not replicated one

58:53

because I would endorse the one that Kenny has.

58:55

So yeah, definitely go to church.tech and download his.

58:58

I was with him last week because we

59:00

were discussing exactly this and it's this is

59:02

the thing that I would recommend. So definitely

59:04

going if you don't follow Kenny, his AI

59:07

for church leaders, Facebook group is amazing. Church.tech

59:09

is great. So definitely go and

59:11

use that in the podcast. I recommend all his stuff.

59:14

But I would say that, you know, that's where we

59:17

do need to kind of, you know, we're getting more

59:19

of these practical guidelines, right? We're getting practical guidelines that

59:21

is beginning to emerge. I think the

59:23

theology of this stuff is still something we're going

59:25

to wrestle with for some time because,

59:28

you know, as these models emerge, what

59:30

happens when a great Christian theology based,

59:33

you know, model arrives that can write

59:35

sermons that was based on hopefully people

59:37

that opted into giving all of their

59:41

sermon content over. It

59:43

can create stuff. And then the question

59:45

is, well, where is, what is that? Is

59:50

that spoken into by God? Can

59:53

he use that stuff? And my answer

59:55

would be yes. I think God can use all things. That's

59:58

what we believe. But

1:00:01

particularly when it comes to teaching and

1:00:03

preaching, we have this very

1:00:06

specific call of ministry

1:00:08

onto that thing. And

1:00:10

we lump that in with the job of a church

1:00:12

leader. And I think

1:00:14

that's where we probably have gone a little bit

1:00:17

wrong along the way as we say the

1:00:19

church... For most church leaders, listening to this, if

1:00:22

that's your job, you probably are a staff member of

1:00:24

one. We often

1:00:26

think of the mega churches and the big places and that's

1:00:28

what a lot of the technology that's out there is catered

1:00:30

to. But for most people, certainly

1:00:32

here in the UK, the 17,000 churches across the Church

1:00:34

of England that I get to work with from time

1:00:37

to time, they're being led by

1:00:39

one person who doesn't have time to do this

1:00:41

stuff. And probably is trying to do all

1:00:43

aspects of leading a church and Kerry, I know you know

1:00:45

this firsthand, and not all of them are great

1:00:47

preachers. In fact, some of

1:00:49

them are really not great preachers at all. And

1:00:51

they would be the first to admit it because

1:00:53

they didn't get into leading ministry to be a

1:00:56

great preacher. They got into it, to

1:00:58

teach the poor or to be evangelists or to do

1:01:00

mission. There's lots of different reasons. And

1:01:02

I think when

1:01:04

we think about these tools, there maybe is a

1:01:06

possibility that they could come along and really help

1:01:09

many people encounter the gospel in a way that they

1:01:11

weren't able to before through the voice of the person

1:01:14

in the local church who was speaking to them. But

1:01:16

we need to work through some of these things in terms

1:01:19

of like, what is our theology of that? And will we

1:01:21

accept that that can be part of the future of what

1:01:23

ministry looks like without necessarily going to

1:01:25

the nth degree of saying, we'll just stick chat

1:01:27

GPT with a voice assistant at the front of

1:01:29

church every Sunday and leave them to it. Well,

1:01:31

you bring your heart to it. You know, there's

1:01:33

a Canadian doctrine, I was just doing a quick

1:01:36

legal search from back in law school. It's a,

1:01:38

I think it's common law. So it could

1:01:40

be in England too. It's called passing off. And

1:01:42

it doesn't mean this, but in my head, I

1:01:44

always take that term and

1:01:46

apply it to church world. If I take,

1:01:49

you know, someone else's sermon or someone else's

1:01:51

book, like I've got William Ury's here on

1:01:53

my desk, and I'm like reading

1:01:55

you chapter one and I pretend it's my

1:01:57

work. I'm passing it off as

1:02:00

though I did all the work, I did

1:02:02

all this stuff. And I think that's out

1:02:04

of bounds. I don't care whether it's analog

1:02:06

or digital. If you're pretending to do something

1:02:09

that you didn't do, on the other hand,

1:02:12

great books have great footnotes. And Tim

1:02:14

Keller joked, as has been told on this

1:02:16

podcast once or twice; he was famous

1:02:18

for saying, well, first of all, almost

1:02:21

any random sermon you listen to by Tim Keller

1:02:23

probably has 25 to 50 references in

1:02:26

it. As C.S. Lewis says, as

1:02:28

Miroslav Volf says, as this person says, as

1:02:30

this person says, and you

1:02:32

don't think anything of it, you just think, wow,

1:02:34

that guy reads widely, which he does. And he

1:02:36

says, when I have time to write an original

1:02:38

sermon, that's how it goes. When I'm out of

1:02:40

time, I just quote C.S. Lewis. But he always

1:02:42

quoted C.S. Lewis. He never passed off. He would

1:02:45

say, as C.S. Lewis said, as C.S. Lewis said,

1:02:47

it was sort of a joke. But

1:02:49

one of the most profound thinkers of

1:02:51

this generation, Tim Keller, and he was

1:02:54

quoting people left, right, and center. So

1:02:56

if you're using chat GPT, throw your

1:02:58

heart into it, but just say,

1:03:00

hey, I had some help from this, or I had some

1:03:02

help here. But I think as a

1:03:04

pastor, you have to, and as a leader, I was

1:03:06

reading an article a friend sent me from

1:03:09

the Wall Street Journal, and it was

1:03:11

saying, what's happening to tech, Axios is

1:03:13

a media organization. And they're saying we're

1:03:16

humanizing the news at a very deep

1:03:18

level. They were founded in 2017

1:03:21

from people, I think, who started Politico, and

1:03:23

they're going to humanize this at a very

1:03:25

deep level. I think when you bring your

1:03:27

humanity to the task, when I bring my

1:03:29

humanity to the show, when it's actually me

1:03:31

asking you the questions, and as you note,

1:03:33

like most guests do, how many

1:03:36

actual questions I sent you, did I actually use?

1:03:38

Very few. Because we're

1:03:40

having a great conversation.

1:03:42

But any agree, disagree? I'd

1:03:44

love your thoughts on that. Well,

1:03:47

no, so I agree. I think that

1:03:49

we all have to weigh

1:03:51

ourselves as to how much of my own

1:03:53

humanity do I want to bring to any

1:03:55

of these things that we do with AI,

1:03:57

whether that's preaching a sermon, whether that's writing

1:03:59

the annual. your address, whether it's writing a

1:04:02

birthday poem for a friend, whatever it

1:04:04

might be, or writing your... hopefully

1:04:06

you're not writing your wife's Christmas card,

1:04:08

kind of like a missive using AI.

1:04:11

But you might. You can

1:04:13

do all these things. We have to bring our humanity to and weigh

1:04:15

each of these things up individually. I mean, Kerry,

1:04:17

I'll turn it on you. How would you feel about

1:04:19

the idea of a Kerry bot in

1:04:21

the future? Long after you've kind of gone

1:04:23

to be with the Lord and

1:04:25

all of your sermon content falls over

1:04:27

into the public domain. Do you want

1:04:30

people interacting with Kerry 3.0 in the

1:04:32

future and downloading

1:04:35

future versions of sermons that you never wrote?

1:04:37

Write me a sermon on 1 Corinthians 13

1:04:39

with the theology of

1:04:42

C.S. Lewis and the style of Kerry Newhoff.

1:04:44

What would that look like? Yeah, I find

1:04:47

that very problematic. First of all, we were

1:04:49

talking about this as a team very recently.

1:04:51

I think most of the work I've done

1:04:53

in this life is going to disappear in a

1:04:55

generation. There's

1:04:57

very few people who get read beyond

1:05:00

their demise. But I think it...

1:05:02

Honestly, it's a great question, JP. Unless

1:05:06

things get upended very, very quickly,

1:05:08

I think the one thing

1:05:10

that has the potential to outlive you is a

1:05:12

book you write. Podcasts

1:05:15

come and go, blog posts come and

1:05:17

go, social media posts vaporize

1:05:20

with the dawn. I

1:05:22

think it's a book. And

1:05:25

what I like about a book when they're well

1:05:27

done is a book's taken you

1:05:30

anywhere from a lifetime to

1:05:32

multiple years to produce.

1:05:34

It's your best thinking on an issue. It's 30,

1:05:36

40, 50,

1:05:39

70,000 words on a dedicated subject.

1:05:42

And it makes a long-form argument that

1:05:45

you just can't make on

1:05:47

digital means. You can't do

1:05:49

it. And so, alternatively, if

1:05:53

you have a lecture series, you're a professor, it probably

1:05:56

has that level of thought to it in

1:05:58

lecture notes. But often, lectures endure when

1:06:02

they get published. So I think if

1:06:02

you have a tome of work that

1:06:05

might be, by the time I'm gone, hopefully

1:06:07

I've written eight or nine books and that's

1:06:09

plenty for a lifetime. Maybe one of them

1:06:11

has a chance to be helpful beyond my

1:06:14

death, but the sermon chatbot

1:06:16

for Charles Spurgeon, I'm sure

1:06:19

is coming if it hasn't been invented yet.

1:06:21

But I think if you have legacy thinkers

1:06:23

or people who really helped shape the thought

1:06:26

of a generation that perhaps

1:06:28

a chatbot would be helpful. I

1:06:30

think for most of us, myself

1:06:32

included, it's like you made a contribution

1:06:34

in your lifetime, you did a good job and

1:06:37

most people tend to forget. Gordon MacDonald told me

1:06:39

years ago and he's written some amazing

1:06:41

books. I was asking him advice as

1:06:43

I stepped back from the teaching team

1:06:45

and like retired, retired at Connexus and

1:06:48

just became a congregant. He

1:06:50

said, Carrie, they forget you quickly. And

1:06:52

he was so accurate. And

1:06:54

that's humbling, but you just have to realize your

1:06:56

place in the culture. And so

1:06:59

yeah, I wouldn't have any doubt. I think

1:07:01

while I'm alive, we're working on a Kerry

1:07:03

bot so that people can ask

1:07:05

me questions and it'll spit out answers. I have

1:07:07

thousands of articles and ideas

1:07:10

and published sources out there. We'll feed all

1:07:12

that into the GPT, or LLM, and

1:07:15

it'll pump

1:07:17

out hopefully accurate answers. We haven't been

1:07:19

able to get it accurate enough yet,

1:07:21

but when we do get it,

1:07:23

I don't want to be responsible for saying things I

1:07:25

didn't mean and never intended to say.
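For readers who want to see what that kind of grounded bot usually looks like under the hood, here is a minimal sketch, assuming the OpenAI Python SDK and numpy; the model names, the tiny three-item archive, and the prompts are illustrative placeholders, not how Kerry's team actually built theirs.

```python
# Hypothetical sketch of a retrieval-grounded "Kerry bot": retrieve the
# most relevant excerpts from a fixed archive, then make the model answer
# only from them. Model names and archive contents are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def embed(texts):
    # Turn passages into embedding vectors for similarity search.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# In reality this would be thousands of article and transcript chunks.
archive = [
    "Excerpt from article one...",
    "Excerpt from article two...",
    "Excerpt from article three...",
]
archive_vecs = embed(archive)

def answer(question, top_k=2):
    # Rank archive chunks by cosine similarity to the question.
    q = embed([question])[0]
    sims = archive_vecs @ q / (
        np.linalg.norm(archive_vecs, axis=1) * np.linalg.norm(q)
    )
    context = "\n\n".join(archive[i] for i in np.argsort(sims)[-top_k:])
    # Constraining the model to retrieved excerpts is the usual guardrail
    # against it saying things the author never meant to say.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Answer only from the excerpts provided. If they don't "
                "cover the question, say you don't know."
            )},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```

The design choice that matters is the system instruction: a retrieval bot that is told to refuse when the excerpts run out is far less likely to put words in anyone's mouth.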

1:07:28

I look forward to playing around with it. I might even write my

1:07:30

own sermon using it, too. That'd be

1:07:32

great. That'd be great. So I'm going to ask

1:07:34

you one or two more questions and then open

1:07:36

the floor to you. What do

1:07:39

you wish churches were doing with AI that they're not

1:07:41

doing right now? Wow.

1:07:43

I mean, there is a lot

1:07:45

just to kind of get off the starting blocks with,

1:07:47

which is again, we have

1:07:50

this wonderful opportunity, don't we, like in

1:07:52

church ministry to bring people

1:07:55

into relationship with Jesus and have them

1:07:57

grow and be nurtured and be discipled.

1:08:00

But the thing that often gets in the way

1:08:02

of that is just the day-to-day drudgery of managing

1:08:04

our money,

1:08:06

managing our teams, putting people on rotas,

1:08:08

all of this stuff. So one of

1:08:10

the big things I just wish churches

1:08:12

would be doing is seeing this massive

1:08:14

opportunity that's in front of them to

1:08:16

pick up a bunch of things that

1:08:18

might just make that task a little

1:08:21

bit easier this week, this month. Think

1:08:23

about, particularly in the post-digital age and

1:08:25

post-COVID, we're all wrestling with having to

1:08:27

basically compete for attention on social media,

1:08:29

to create content, to bring people through the

1:08:31

doors. It's the primary place

1:08:33

that people are spending time and money in terms

1:08:35

of evangelism and outreach. But most

1:08:38

churches don't have a church staff. As I

1:08:40

said before, most churches are one person with

1:08:43

an army of well-meaning volunteers. And

1:08:45

so they don't have teams that can help do

1:08:47

the social media posting or manage a schedule or

1:08:49

put together a rota. And if

1:08:51

they are doing it, they're often having to do it in

1:08:54

their spare time. So I

1:08:56

think one of the big things is

1:08:58

that churches would adopt some of these

1:09:00

things just to get the basics done

1:09:02

and spend more of their well-earned time and

1:09:04

their congregation's resources on that, rather

1:09:06

than having to relearn how

1:09:08

to do another three social media posts in

1:09:10

Canva next Sunday. So

1:09:12

you know what's fascinating? I've got a personal story on

1:09:14

that. We had some family over a couple of weeks

1:09:16

ago, and I have a niece who started her own

1:09:19

business. And she wasn't at the table,

1:09:21

but her mom was. And she was just saying

1:09:23

how overwhelming it was for my niece to

1:09:26

try to come up with social media. But she

1:09:28

wants to get the word out. And I'm like,

1:09:30

well, have you used AI? So we literally just

1:09:32

brought my laptop to the kitchen counter. And I

1:09:35

used a couple of AIs and just entered

1:09:37

a prompt that said, here's the name of

1:09:39

the salon. Here's the name of the person,

1:09:42

this location, this is her specialization. Create

1:09:44

a social media plan for

1:09:46

the next 30 days on her Instagram

1:09:48

account. Literally, strategy,

1:09:51

exact text, everything,

1:09:53

image suggestions in

1:09:55

about 25 seconds. And my

1:09:57

sister-in-law started to cry. She's

1:10:00

like, this exists? I'm like,

1:10:02

absolutely. And that was the, well, I

1:10:04

think I was paid. I've done the same thing

1:10:06

with a teacher that I know, a friend of

1:10:08

mine, who's a teacher. She teaches fourth grade here

1:10:11

in the UK. And I showed

1:10:13

her how to transcribe a YouTube video, turn

1:10:15

it into a quick, short

1:10:17

story analogy for her class, and

1:10:19

then write a bunch of follow-up questions.
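A minimal sketch of that teacher workflow, under stated assumptions: the transcript is obtained with whatever transcription tool you like and passed in as a string, the model name is illustrative, and the function name is made up for this example.

```python
# Hypothetical sketch of the workflow described above: a video transcript
# goes in, a class-ready retelling plus follow-up questions comes out.
from openai import OpenAI

client = OpenAI()

def lesson_from_transcript(transcript):
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            # The system message fixes the audience and format up front.
            {"role": "system", "content": (
                "You turn video transcripts into short story analogies "
                "for a fourth-grade class."
            )},
            {"role": "user", "content": (
                "Rewrite this transcript as a short story a nine-year-old "
                "would follow, then write five follow-up questions:\n\n"
                + transcript
            )},
        ],
    )
    return resp.choices[0].message.content
```

Called as lesson_from_transcript(text_of_the_video), it returns the story and the questions in one pass.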

1:10:23

And that same reaction, just people's eyes light

1:10:25

up and like, this stuff is actually here. The

1:10:28

thing that I really want people to do is not just use it, but

1:10:30

know how to use it really well. Like, that's

1:10:32

where the real game changer is. So, yeah, I

1:10:34

don't know if you did this with your example,

1:10:37

with your friend, but, you know, telling it to

1:10:39

not just write a social media plan, but tell it,

1:10:41

hey, you're going to behave like an absolute social

1:10:43

media expert. You know everything that you need to

1:10:45

know about how to post on this platform. You've

1:10:47

learned from all of the big influencers. And

1:10:50

now this is all the information I want

1:10:52

you to go and produce for me. Those

1:10:54

little tweaks in the prompt do amazing things.

1:10:56

They still work. Okay, that's good to know.

1:10:58

They still work there. And these

1:11:00

models, they produce some fascinating results when you give

1:11:03

them a bit of personality to respond in before

1:11:05

you get started. So I think that's the other

1:11:07

thing I want people to do is use them

1:11:09

well.
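To make the persona tweak concrete, here is a hypothetical before-and-after, reusing the salon example from earlier in the conversation; the model name and the prompt wording are placeholders, not a recommendation.

```python
# The only difference between the two calls below is the system message,
# which is exactly the "give it a bit of personality first" tweak.
from openai import OpenAI

client = OpenAI()

task = (
    "Salon: [name]. Owner: [name]. Location: [town]. Specialization: "
    "[service]. Create a 30-day Instagram plan with post copy and image ideas."
)

# Without a persona: the model falls back on generic behavior.
plain = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": task}],
)

# With a persona: the system message sets the expertise before the task arrives.
primed = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": (
            "You are an absolute social media expert. You know everything "
            "about posting on Instagram and have learned from all of the "
            "big influencers."
        )},
        {"role": "user", "content": task},
    ],
)

print(primed.choices[0].message.content)
```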

1:11:11

Well, there's a lot of stuff we could get to, but we've

1:11:14

been an hour on this already. Anything

1:11:17

we haven't touched on that you'd like to touch

1:11:19

on? I think the main

1:11:21

thing I would say is that I

1:11:24

think that, speaking to anyone that's

1:11:26

kind of working in technology or working in

1:11:28

leadership right now, they have a

1:11:30

responsibility when it comes to AI that they may not

1:11:32

think about, which is that

1:11:35

we are the last generation,

1:11:37

maybe the most anxious, we're

1:11:39

the last generation of

1:11:42

people, of leaders that will

1:11:44

remember what life was like before and

1:11:48

after this technology has

1:11:50

existed. Like my kids,

1:11:52

I've got, at the time of recording, a

1:11:54

nine-year-old and a six-year-old, they

1:11:56

will not know what life is like to go

1:11:58

through school, to go into university, to

1:12:00

work in a workplace, to talk

1:12:03

to a social media thing and

1:12:06

not question whether or not this is a real person

1:12:08

or is this an AI? We're

1:12:11

the last generation to remember what that's like. And so

1:12:13

I think as leaders,

1:12:16

as technologists, as particularly

1:12:18

people working in ministry, we're

1:12:20

the last people to remember what that truly

1:12:22

disconnected experience was like. And we want to

1:12:25

try and look for what was the gold

1:12:27

in that that we want to bring into

1:12:29

the next generation and hold on to. And

1:12:32

what are the things that we want our AIs,

1:12:34

we want our technology to look like in the

1:12:36

future so that we end

1:12:38

up in that more positive view of

1:12:40

where that future curve might go rather

1:12:42

than the more negative. Because I think

1:12:45

we only get one shot at it.

1:12:48

We've seen so many times before that when this

1:12:50

technology or any technology gets out of the box,

1:12:52

there is no real putting it back in there. And

1:12:55

we're seeing obviously efforts at the moment to do that with

1:12:57

social media. We don't want to have to

1:12:59

go through the same thing with AI. And

1:13:01

I think that's our responsibility. It's all of

1:13:03

our responsibility. It comes back to the point

1:13:05

I made at the start, which is you

1:13:07

can wait for the government to regulate a

1:13:09

model or you can hope

1:13:12

that a benevolent AI council arises

1:13:14

out of the big seven companies

1:13:16

running the show or

1:13:19

you and your neighbors and your

1:13:21

children and the people that you work

1:13:23

with take collective responsibility to say, this

1:13:25

is what we want this to look

1:13:27

like in the future and

1:13:29

behave accordingly. And not just

1:13:31

give in to the easy thing,

1:13:34

not to give in to the lazy thing

1:13:37

and not just give up more

1:13:39

utility for our privacy and end

1:13:41

up with unintended consequences. That's what

1:13:43

I really want people to try

1:13:45

and do. I think that is

1:13:47

a fantastic call to action for

1:13:49

older listeners. So I got

1:13:52

the internet when I was 31. I

1:13:54

got my first smartphone when I was 40 and

1:13:57

it was a blackberry. And I was so excited that I

1:13:59

could get emails on it. It was primitive,

1:14:01

Philistine. I miss my BlackBerry. Yeah,

1:14:05

I know. Again, you're too young. But you

1:14:07

know what is interesting? And I've

1:14:09

often thought about pre-digital memory, but I was really

1:14:11

thinking about it. You know, I'm not young as

1:14:14

a leader anymore. And it's like, but

1:14:16

I do know what life was like before

1:14:19

we all got the internet, the prototype

1:14:21

of the internet, the dial up internet.

1:14:24

And I think, you

1:14:26

know, if you think about what it means to

1:14:28

be human, I have an interest in philosophy. It's

1:14:30

people like Voltaire. It's people like Sartre. It's people

1:14:33

like Kierkegaard and Nietzsche to some

1:14:35

extent, although he's pretty dystopian, who

1:14:40

really help us think... The most positive outlook.

1:14:42

Not the most positive outlook. And theologians who make

1:14:44

us think about what life is really about,

1:14:46

it would be wonderful in

1:14:49

these next few decades while those of us

1:14:51

who remember that are still alive, because we

1:14:53

won't be in 30, 40 years,

1:14:56

to know what it means to be

1:14:58

human and to really think about that.

1:15:01

Because I think you're right. Like civilization is about

1:15:03

to change fundamentally and

1:15:06

inalienably forever. And

1:15:09

maybe one of our contributions, because a lot of

1:15:11

older leaders feel useless. They feel like, oh, my

1:15:13

time is gone. I'm not a young leader anymore.

1:15:15

I think to be able to reflect

1:15:18

and prayerfully think through the theological, philosophical

1:15:20

dimensions of what was it like to be

1:15:23

the last generation to grow up with

1:15:25

a pre-digital memory and to interact

1:15:27

as human beings and

1:15:30

to have meaning in your

1:15:32

life. What does that look like? And

1:15:34

then the next generation will separate the wheat from the

1:15:36

chaff and figure out what it keeps and what

1:15:38

it discards. But that

1:15:40

is a tremendous help, I think, to people who

1:15:42

are older. And don't think you have to write

1:15:44

a book. Just get together with

1:15:47

some 25-year-olds and have conversations. That's

1:15:49

a beautiful call to action. I

1:15:52

think the thing that I hold onto is that

1:15:55

God could have chosen to create anything. He could

1:16:00

have chosen to create us

1:16:02

as silicon-based intelligences, but he

1:16:04

chose to build us as carbon ones. And

1:16:07

I often have this phrase that I say a lot

1:16:09

which is that matter matters. We

1:16:11

are physical, 3D, living our

1:16:14

life out in the dimension of time for

1:16:16

a reason, because that's the way that God

1:16:18

intended it. And we don't

1:16:20

want to lose sight of that, that we get

1:16:22

a lot of these kinds of digital fantasies that

1:16:24

live inside of the Silicon Valley bubble, thinking

1:16:26

that the future is us all being downloaded

1:16:29

onto a chip and having endless

1:16:31

life in that way. And that's just not

1:16:33

the model that we've been given. And I think we want

1:16:35

to hang on to that as best

1:16:37

as we can and be the story that

1:16:40

speaks into society that matter matters and that

1:16:42

we are built to be with one another

1:16:45

in an embodied way because that's... He

1:16:48

came to earth in bodily form for

1:16:50

a reason. He

1:16:53

didn't speak to us through a chatbot. Mm-hmm.

1:16:56

Fascinating. JP, people are going to want to

1:16:58

track with you. Where will they find you

1:17:00

online these days and any resources you want

1:17:02

to direct people to in particular? Yeah. Well,

1:17:05

I'm sure you'll find the links in the show notes, but

1:17:07

you'll find me at James Poulter at pretty much every social

1:17:09

media platform you need to. I write

1:17:12

mostly over on LinkedIn, if that's your place or if

1:17:14

you like to hang out. And

1:17:16

if you are interested, I

1:17:18

suppose, in getting more information

1:17:20

about how to follow along

1:17:22

with what we're doing at

1:17:24

Ecclesii, go to ecclesii.org. That's

1:17:26

A-E-C-L-E-S-I, that's the

1:17:29

Ecclesii with the AI bit, dot org: ecclesii.org.

1:17:31

And yeah, you'll get the resources, follow

1:17:33

the research, and engage in the conversation.

1:17:36

Fantastic. Thank you so much. Thank

1:17:38

you, Kerry. Man, I'm so glad

1:17:40

that we got to have this conversation.

1:17:42

Spent some time with JP recently in

1:17:45

London as well. He's a great leader

1:17:47

and I hope that was both practical

1:17:49

and kind of metaphysical for you.

1:17:51

I think that's what AI is

1:17:53

bringing to the table. Hey, next episode,

1:17:55

we've got Ken Blanchard and Randy Conley.

1:17:57

Very excited to talk to the legend.

1:18:00

of the one-minute manager. We'll get the

1:18:02

backstory on that. Seagull management, the power

1:18:04

of simplicity and brevity, and a whole

1:18:06

lot more. And make sure you

1:18:09

check out today's partners. Did

1:18:11

you know I'm doing my first live

1:18:13

event? And if you haven't registered yet,

1:18:15

time is creeping up and

1:18:17

the spots are selling out fast.

1:18:19

So you can secure your spot

1:18:21

by going to the artofleadershiplive.com. That's

1:18:23

the artofleadershiplive.com. Join me for three

1:18:25

days in Dallas. I'm so excited

1:18:27

to do this. It's an intimate

1:18:29

event. I would love to have

1:18:32

you there. A long-time listener. I'm

1:18:34

talking to you. Go to theartofleadershiplive.com.

1:18:36

And our friends at 10 by

1:18:38

10 want to help you

1:18:40

figure out the impact that your

1:18:42

ministry is making with the next

1:18:44

generation. If you're a youth pastor,

1:18:47

fill out their free relational discipleship

1:18:49

inventory that will measure your youth

1:18:51

ministry's efforts. Go to

1:18:54

tenx10.org/RDI today for your

1:18:57

free assessment. That's t-e-n-x-1-0.org/RDI

1:18:59

to learn

1:19:02

more. As I said, Ken Blanchard's

1:19:05

coming up. We've also got Priscilla

1:19:07

Shirer, Rich Villodas, Steve Cuss, Nicky

1:19:09

Gumbel, Max Lucado is

1:19:11

coming back, Andy Stanley, Ed Stetser, and a

1:19:13

whole lot more. And N.T. Wright, did I

1:19:16

mention him as well? And then I've

1:19:18

got something for you because you listened to the end. First

1:19:21

of all, if you enjoyed this, please leave a

1:19:23

rating and review. And secondly, make sure you check

1:19:25

out the Preaching Cheat Sheet. You know, over 10,000

1:19:27

leaders use it pretty much

1:19:29

every week to help them determine ahead of

1:19:31

time whether their message is going to land

1:19:33

or whether it needs a little more work.

1:19:35

So you can go to preachingcheatsheet.com, download your

1:19:37

free copy of it. The link is also

1:19:39

available in the show notes. And thank you

1:19:41

so much for listening, everybody. I appreciate you

1:19:43

so much. I have spent a lot of

1:19:45

time on the road and I got to tell you, I

1:19:48

have loved meeting so many of you. A lot

1:19:50

of you go back to episode one, which is

1:19:52

incredible because we've been at this almost a decade.

1:19:54

And man, you're giving me like real time feedback

1:19:57

on all the things that made a difference in

1:19:59

your life. And I appreciate you and I

1:20:01

appreciate that so much. I can't believe we get

1:20:03

to do this. All right. Well, I

1:20:05

hope today's episode helped you identify and break

1:20:08

a growth barrier you're facing. We'll catch you

1:20:10

next time on the podcast.
