WEBVTT

1
00:00:00.480 --> 00:00:04.200
<v Intro>Morning and welcome. Thanks everyone on this cool,</v>

2
00:00:04.201 --> 00:00:06.090
frosty Adelaide morning.

3
00:00:06.780 --> 00:00:10.680
It is lovely to see all your faces here for what is going to be a very

4
00:00:10.681 --> 00:00:14.940
interesting and exciting conversation. So without further ado,

5
00:00:15.300 --> 00:00:17.580
I'd like to introduce Ellen Broad.

6
00:00:18.390 --> 00:00:20.850
She's a respected expert in data sharing,

7
00:00:20.940 --> 00:00:23.490
open data and artificial intelligence.

8
00:00:24.060 --> 00:00:27.960
Her forthcoming book, Made by Humans:

9
00:00:27.990 --> 00:00:32.520
The AI Condition, is about designing AI ethically.

10
00:00:32.970 --> 00:00:37.650
So we should have some very interesting elements of the conversation today.

11
00:00:38.040 --> 00:00:38.880
And I look forward to it.

12
00:00:38.910 --> 00:00:43.260
The session will run for about two minutes and we'll have time for questions at

13
00:00:43.261 --> 00:00:47.520
the end. So please think about anything you'd like to ask Ellen.

14
00:00:47.521 --> 00:00:51.420
So welcome. I'd like to welcome Ellen up to the stage now,

15
00:00:51.660 --> 00:00:55.050
if you'd like to give her a round of applause [inaudible].

16
00:00:56.180 --> 00:00:59.840
<v Ellen Broad>Thank you very much for having me. I should say I feel right at home,</v>

17
00:00:59.841 --> 00:01:02.900
coming from Canberra, where it is frosty every morning.

18
00:01:02.901 --> 00:01:04.520
So this is kind of perfect.

19
00:01:05.180 --> 00:01:09.500
So I'm going to talk about artificial

20
00:01:09.501 --> 00:01:10.430
intelligence.

21
00:01:11.630 --> 00:01:15.170
One of the trends we've seen in the last couple of years is a move from what I

22
00:01:15.171 --> 00:01:18.110
call low stakes artificial intelligence,

23
00:01:18.650 --> 00:01:22.700
making predictions about the films we might like to watch on

24
00:01:22.701 --> 00:01:23.660
Netflix,

25
00:01:24.080 --> 00:01:28.610
the books we might purchase on Amazon, to high-stakes artificial

26
00:01:28.611 --> 00:01:33.380
intelligence, which is making decisions about the kind of person we might be

27
00:01:34.820 --> 00:01:36.980
decisions about what beauty looks like.

28
00:01:37.010 --> 00:01:40.220
This was the first international beauty contest judged by artificial

29
00:01:40.221 --> 00:01:45.110
intelligence. Making decisions about things like your IQ,

30
00:01:45.140 --> 00:01:48.440
predictions about the kind of job that you might have,

31
00:01:48.740 --> 00:01:52.010
the interactions that you might have in society.

32
00:01:52.011 --> 00:01:54.320
This is an Israeli company called Faception.

33
00:01:55.040 --> 00:01:59.870
We're also starting to see the use of artificial intelligence to monitor mental

34
00:01:59.871 --> 00:02:00.800
health issues,

35
00:02:00.801 --> 00:02:05.660
to try to tell us when we are experiencing the symptoms of

36
00:02:05.661 --> 00:02:07.130
depression and anxiety.

37
00:02:08.150 --> 00:02:12.080
Some of you might have seen the recent article in the guardian about professor

38
00:02:12.110 --> 00:02:16.940
Michal Kosinski's work using machine learning to

39
00:02:16.941 --> 00:02:19.850
try to predict a person's sexuality.

40
00:02:20.150 --> 00:02:24.770
He also has developed applications of machine learning that

41
00:02:24.771 --> 00:02:28.100
purport to identify our political orientation.

42
00:02:29.331 --> 00:02:34.190
We're seeing the use of machine learning to absorb our social

43
00:02:34.191 --> 00:02:38.390
media interactions and purport to make accurate predictions about our

44
00:02:38.391 --> 00:02:39.380
personalities.

45
00:02:39.620 --> 00:02:44.060
So we're starting to see this move from: because you

46
00:02:44.061 --> 00:02:46.370
watched The Incredibles on Netflix,

47
00:02:46.371 --> 00:02:49.640
you might be interested in Kung Fu Panda 2, to:

48
00:02:50.000 --> 00:02:52.910
based on the things that you say on Twitter,

49
00:02:53.450 --> 00:02:58.160
you are a conscientious, agreeable person who is a strong team

50
00:02:58.161 --> 00:02:58.640
player.

51
00:02:58.640 --> 00:03:02.850
So these are the kinds of trends that we're seeing. Before I get back to some of those

52
00:03:02.851 --> 00:03:03.301
examples.

53
00:03:03.301 --> 00:03:07.290
I want to talk about some of the words that I use in this presentation,

54
00:03:07.291 --> 00:03:11.670
that come up quite often, and a really crucial one is algorithm.

55
00:03:12.630 --> 00:03:14.070
It's not a new word,

56
00:03:14.610 --> 00:03:18.870
but increasingly we hear it as something that is divorced from humans:

57
00:03:18.871 --> 00:03:22.110
we hear things like the algorithm is making a decision.

58
00:03:22.890 --> 00:03:25.950
The algorithm is predicting our

59
00:03:27.120 --> 00:03:31.800
re-entry into society after being convicted of a criminal offense.

60
00:03:32.250 --> 00:03:35.600
The challenge with a word like algorithm is what I have

61
00:03:35.601 --> 00:03:40.410
borrowed from Marvin Minsky. He has this concept of suitcase words,

62
00:03:40.710 --> 00:03:44.160
words that have come to mean a wide variety of things.

63
00:03:44.280 --> 00:03:47.250
And algorithm has become one of these suitcase words.

64
00:03:47.550 --> 00:03:52.350
Sometimes we use the word algorithm where a much simpler word would

65
00:03:52.351 --> 00:03:54.150
suffice and be more accurate.

66
00:03:54.480 --> 00:03:58.860
Like we could say a statistical formula is calculating

67
00:03:59.130 --> 00:04:03.930
your credit risk or a checklist is determining whether

68
00:04:03.931 --> 00:04:08.400
you're at risk of re-offending. Some algorithms are not complicated at all.

69
00:04:08.730 --> 00:04:12.390
One of the ways that we typically describe an algorithm,

70
00:04:12.391 --> 00:04:17.040
you hear this quite often in TED talks and introductions, is it's like a set of

71
00:04:17.041 --> 00:04:20.160
instructions. It's like a recipe or a knitting pattern.

72
00:04:20.720 --> 00:04:23.730
And at a very high level, that is kind of what an algorithm is.

73
00:04:23.760 --> 00:04:28.350
That's why you can use words like checklist or formula in its

74
00:04:28.380 --> 00:04:31.140
place. But in computer science,

75
00:04:31.350 --> 00:04:34.200
when a computer scientist is talking about an algorithm,

76
00:04:34.410 --> 00:04:37.470
it typically has a more discrete meaning.

77
00:04:37.471 --> 00:04:40.770
An algorithm is typically a way of undertaking

78
00:04:40.800 --> 00:04:45.210
a kind of common fundamental task, like ordering a list.

79
00:04:45.600 --> 00:04:50.190
And your computer program laces together

80
00:04:50.220 --> 00:04:53.790
algorithms that are undertaking these fundamental tasks.

81
00:04:54.210 --> 00:04:58.470
And that is what your computer program is. And this causes a lot of confusion.
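
NOTE
Editor's sketch, not part of the talk: a minimal Python illustration of the distinction drawn here, with hypothetical names. insertion_sort is an "algorithm" in the computer scientist's sense, one self-contained routine for a fundamental task like ordering a list; rank_films is the "program", which laces routines like it together to produce an overall result, and that lacing is where the human choices sit.
  def insertion_sort(items):
      # One self-contained algorithm: put a list in order.
      ordered = []
      for item in items:
          pos = 0
          while pos < len(ordered) and ordered[pos] < item:
              pos += 1
          ordered.insert(pos, item)
      return ordered
  def rank_films(ratings):
      # The "program": it strings small steps together (scoring, then sorting)
      # to decide what to suggest first.
      scored = [(score, title) for title, score in ratings.items()]
      return [title for score, title in reversed(insertion_sort(scored))]
  print(rank_films({"The Incredibles": 4.8, "Kung Fu Panda 2": 4.2}))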

82
00:04:58.471 --> 00:05:02.010
When we talk about algorithms in the context of artificial intelligence,

83
00:05:02.190 --> 00:05:05.220
because if you're a computer scientist and you're using your knitting pattern

84
00:05:05.221 --> 00:05:09.090
analogy, an algorithm is not your knitting pattern,

85
00:05:09.330 --> 00:05:13.620
it's like your cable stitch or your moss stitch.

86
00:05:14.190 --> 00:05:17.370
Your program is that overall set of instructions.

87
00:05:17.400 --> 00:05:20.760
That's going to end with a jumper at the end. And so this is where,

88
00:05:20.761 --> 00:05:24.660
when we start to say things like algorithms are making decisions,

89
00:05:24.990 --> 00:05:29.250
understanding the wide variety of things we could be talking about is really

90
00:05:29.251 --> 00:05:32.220
important. It could be a computer program.

91
00:05:32.490 --> 00:05:35.760
It could be an Excel spreadsheet. It could be the checklist,

92
00:05:35.790 --> 00:05:39.300
a pilot uses on an airplane to determine whether it's safe to fly.

93
00:05:39.330 --> 00:05:42.780
So always digging into, well, what do you mean when you say algorithm?

94
00:05:42.781 --> 00:05:44.700
What does this look like? That is really important.

95
00:05:45.270 --> 00:05:47.970
The second concept that I'm going to talk a lot about is data.

96
00:05:48.450 --> 00:05:52.740
So at the moment when we talk about artificial intelligence

97
00:05:53.220 --> 00:05:58.100
quite often what we're talking about is machine learning. Often, though not

98
00:05:58.101 --> 00:06:02.150
always (there are some applications using sparse data for machine learning),

99
00:06:02.151 --> 00:06:06.590
but most of the time we're talking about using massive quantities of data to

100
00:06:06.591 --> 00:06:10.190
make predictions, to learn trends, to make decisions.

101
00:06:10.820 --> 00:06:12.620
And quite often in our head,

102
00:06:12.621 --> 00:06:17.000
we have this idea of data as something homogenous, gushing,

103
00:06:17.120 --> 00:06:17.990
natural,

104
00:06:18.260 --> 00:06:22.280
not helped by phrases like data is the new oil.

105
00:06:22.970 --> 00:06:26.540
When in reality, if you're someone that works with data all the time,

106
00:06:26.570 --> 00:06:31.340
it can often be messy, incomplete, it can have gaps,

107
00:06:31.670 --> 00:06:35.900
it can be wholly unsuitable for the purpose to which you're trying to use it.

108
00:06:36.200 --> 00:06:41.060
It is a kind of a very human asset created by

109
00:06:41.061 --> 00:06:41.631
humans,

110
00:06:41.631 --> 00:06:46.430
whether they're using instruments like sensors or whether they're

111
00:06:46.431 --> 00:06:48.590
collecting information from people,

112
00:06:48.620 --> 00:06:53.450
it has kind of the contours of the manner in which it

113
00:06:53.451 --> 00:06:56.360
was collected that affect what we can use it for.

114
00:06:56.360 --> 00:06:59.480
And it's really important to think about in some of the examples that I'll use.

115
00:07:00.140 --> 00:07:01.730
The third one is artificial intelligence,

116
00:07:01.880 --> 00:07:06.350
which is what gets everyone coming to presentations like this: it's about AI.

117
00:07:06.560 --> 00:07:11.420
AI can mean lots of different things: drones, robotics,

118
00:07:11.421 --> 00:07:14.780
machine learning, and natural language processing.

119
00:07:15.480 --> 00:07:20.150
I use it because it's the zeitgeist. Most of what I'm talking about though,

120
00:07:20.151 --> 00:07:24.110
here is machine learning. I wouldn't say that there's anything

121
00:07:26.030 --> 00:07:29.180
superhuman about it, because artificial intelligence,

122
00:07:29.181 --> 00:07:32.630
as I kind of keep coming back to throughout is made by humans.

123
00:07:33.590 --> 00:07:37.370
We make choices as developers,

124
00:07:37.670 --> 00:07:41.990
as engineers, as your data scientists, as your data collectors,

125
00:07:42.020 --> 00:07:45.230
as your product owners, deciding how it is going to get applied,

126
00:07:45.500 --> 00:07:49.910
we decide what will be used as an input,

127
00:07:50.660 --> 00:07:55.550
what our applications could be used for and the trade-offs

128
00:07:55.670 --> 00:07:57.410
that we're prepared to take into account.

129
00:07:57.800 --> 00:08:02.720
And this has a real bearing on some of the applications that

130
00:08:02.721 --> 00:08:03.320
we develop.

131
00:08:03.320 --> 00:08:08.270
This is where issues of bias or inaccuracy come to

132
00:08:08.271 --> 00:08:09.104
light.

133
00:08:09.260 --> 00:08:14.150
The first international beauty contest that was judged by a

134
00:08:14.151 --> 00:08:18.830
robot identified 44 winners from entries from over 6,000

135
00:08:18.831 --> 00:08:21.230
people. And nearly all of them were white,

136
00:08:21.560 --> 00:08:26.120
even though the entrants that put forward photos of

137
00:08:26.121 --> 00:08:30.710
themselves to the competition included large populations of people from India

138
00:08:30.740 --> 00:08:31.573
and China,

139
00:08:31.910 --> 00:08:36.500
but the algorithms still learnt to correlate whiteness with beauty.

140
00:08:36.501 --> 00:08:38.690
There were 44 overall winners,

141
00:08:38.900 --> 00:08:42.770
a handful of those were Asian and one was dark skinned.

142
00:08:43.070 --> 00:08:47.030
And that was because the data that the algorithm was trained on,

143
00:08:47.060 --> 00:08:50.690
the data that the program was trained on to learn,

144
00:08:50.691 --> 00:08:55.440
to identify beauty didn't have a lot of images of minority

145
00:08:55.441 --> 00:08:56.760
faces to learn from.

146
00:08:56.761 --> 00:08:59.670
So this is not the people submitting their photos to the contest,

147
00:08:59.700 --> 00:09:04.140
but how it was originally trained to identify beauty.

148
00:09:04.380 --> 00:09:08.970
And as the chief science officer said to the media at the time when the results

149
00:09:08.971 --> 00:09:10.080
of this became apparent,

150
00:09:10.470 --> 00:09:15.330
they didn't teach the machine to correlate beauty

151
00:09:15.331 --> 00:09:16.410
with lighter skin.

152
00:09:16.710 --> 00:09:20.910
It was that in the absence of a lot of images of darker skin and diverse

153
00:09:20.911 --> 00:09:23.100
populations in their training data,

154
00:09:23.370 --> 00:09:27.030
it learned to correlate whiteness with beauty, regardless.

155
00:09:27.060 --> 00:09:29.160
That was something that it learned on its own.

156
00:09:29.490 --> 00:09:33.000
And that's kind of reflective of some of the trends that we see in our own

157
00:09:33.001 --> 00:09:33.990
society. You know,

158
00:09:34.050 --> 00:09:39.030
your majority demographics can influence the way that we

159
00:09:39.031 --> 00:09:40.350
think about minorities.

160
00:09:40.620 --> 00:09:45.510
And so when we put this data into programs, into machines, it learns to

161
00:09:45.511 --> 00:09:50.370
reflect those back at us. Faception has been incredibly controversial

162
00:09:50.371 --> 00:09:53.880
because it is trying to make decisions and predictions about people that

163
00:09:53.881 --> 00:09:58.770
themselves are quite hard to create an objective set

164
00:09:58.771 --> 00:10:03.210
of criteria for. What is the definition of an academic researcher?

165
00:10:04.080 --> 00:10:06.030
What is the definition of a terrorist?

166
00:10:06.031 --> 00:10:10.830
How do you teach a machine to match these

167
00:10:10.831 --> 00:10:15.240
kinds of identities when you end up

168
00:10:15.241 --> 00:10:19.350
inevitably moving into stereotypes of what some of these

169
00:10:19.860 --> 00:10:22.620
kinds of people might be? Similarly,

170
00:10:22.621 --> 00:10:27.360
things like high IQ, when IQ itself is still a contested measure of

171
00:10:27.361 --> 00:10:28.200
intelligence,

172
00:10:28.560 --> 00:10:33.240
what does it mean when we embed it in machines and then treat the decisions made

173
00:10:33.241 --> 00:10:34.920
by those machines as objective?

174
00:10:35.940 --> 00:10:40.590
we're starting to see an increasing number of applications of

175
00:10:40.591 --> 00:10:43.380
machine learning in the health sector.

176
00:10:43.560 --> 00:10:46.830
There was actually a really interesting study by the University of Sydney last week,

177
00:10:46.860 --> 00:10:48.180
looking at a lot of these apps,

178
00:10:48.540 --> 00:10:52.980
which can often confuse ordinary symptoms of stress.

179
00:10:52.981 --> 00:10:55.800
So not the instance of clinical depression,

180
00:10:56.100 --> 00:11:00.840
but kind of an ordinary encounter with stress in day-to-day

181
00:11:00.841 --> 00:11:05.640
life with a clinical condition and kind of confuse

182
00:11:05.641 --> 00:11:09.300
people as to whether what they're experiencing is just the highs and lows of

183
00:11:09.301 --> 00:11:10.170
everyday life,

184
00:11:10.380 --> 00:11:14.730
or actually a sign of a deeper disorder because the app in your phone can't

185
00:11:14.731 --> 00:11:19.320
really tell the difference. And the work done by

186
00:11:19.740 --> 00:11:20.371
Michal Kosinski,

187
00:11:20.371 --> 00:11:25.140
which I will talk a little bit about, has itself been really controversial for

188
00:11:25.350 --> 00:11:28.890
the choices that he made about the data to train his model.

189
00:11:30.240 --> 00:11:32.580
It's called, colloquially, the gaydar,

190
00:11:33.270 --> 00:11:36.750
G-A-Y-D-A-R. You know,

191
00:11:36.751 --> 00:11:41.100
he trained it on images of people only between 20 and 45,

192
00:11:41.430 --> 00:11:46.350
all white, and only used the idea of gay and straight as

193
00:11:47.190 --> 00:11:50.850
binaries, to then train a machine to identify sexual orientation.

194
00:11:50.851 --> 00:11:55.690
So it kind of tried to fashion its notion of sexuality on quite a homogenous

195
00:11:56.320 --> 00:11:59.530
dataset. And Apply Magic Sauce, you can all do it,

196
00:11:59.680 --> 00:12:03.610
make sure that you then untick afterwards so that it revokes access to your

197
00:12:03.611 --> 00:12:07.990
Twitter and Facebook credentials. But Apply Magic Sauce, for example,

198
00:12:07.991 --> 00:12:12.340
on my social media data has assumed there's a 67%

199
00:12:12.341 --> 00:12:16.150
probability that I am male, that I am an introvert,

200
00:12:16.330 --> 00:12:17.830
that I'm not a team player.

201
00:12:18.100 --> 00:12:21.820
And I think this is partly because I mainly tweet about artificial intelligence

202
00:12:21.821 --> 00:12:22.960
and technology issues,

203
00:12:23.290 --> 00:12:27.070
and it has learned to correlate that with the characteristics of a kind of

204
00:12:27.430 --> 00:12:32.290
stereotypical computer programmer. So these are the kinds of,

205
00:12:32.320 --> 00:12:35.980
this is the world that we're living in and ethics is becoming increasingly

206
00:12:35.981 --> 00:12:38.950
important. And sometimes you kind of hear

207
00:12:40.480 --> 00:12:43.540
computer programmers, developers, talk about their job

208
00:12:43.541 --> 00:12:46.780
not being to make decisions about humans, they're just writing code.

209
00:12:47.260 --> 00:12:49.210
And actually within the industry,

210
00:12:49.211 --> 00:12:53.470
there have been a number of incredibly articulate well-known scientists.

211
00:12:53.650 --> 00:12:55.960
Who've pushed back on this for decades.

212
00:12:56.290 --> 00:12:58.810
A really famous one was Karen Spärck Jones,

213
00:12:59.230 --> 00:13:03.910
who, in an interview with the British Computer Society in the mid two

214
00:13:03.911 --> 00:13:06.820
thousands, talked about how, to be a proper professional,

215
00:13:06.821 --> 00:13:09.010
you need to think about the context and the

216
00:13:09.040 --> 00:13:12.100
motivation and justifications for what you're doing.

217
00:13:12.580 --> 00:13:17.260
The point is that there is an interaction between the context in which

218
00:13:17.950 --> 00:13:20.650
the program that you're developing will operate.

219
00:13:20.950 --> 00:13:23.860
And the programming task that you're being asked to do itself.

220
00:13:24.220 --> 00:13:28.930
She is one of the lead kind of pioneers in natural

221
00:13:28.931 --> 00:13:33.910
language processing. Her discoveries and her research

222
00:13:33.911 --> 00:13:38.020
pioneered search engines; it was behind AltaVista,

223
00:13:38.080 --> 00:13:42.070
one of the first kind of big search engines in the early two thousands.

224
00:13:43.060 --> 00:13:47.350
She is one of the recipients of the Ada Lovelace Medal,

225
00:13:47.410 --> 00:13:51.700
with which the British Computer Society recognizes pioneers in computer science,

226
00:13:52.060 --> 00:13:55.240
Ada Lovelace being the first computer programmer. Today,

227
00:13:55.280 --> 00:14:00.130
she is still the only woman to have won the Ada Lovelace Medal for excellence in

228
00:14:00.131 --> 00:14:03.790
computer science, even though it's been awarded for almost 20 years.

229
00:14:04.030 --> 00:14:05.800
And I think there are,

230
00:14:05.801 --> 00:14:10.510
hopefully there are a great number of other women pioneers in

231
00:14:10.511 --> 00:14:13.300
computer science that I hope get recognized soon too.

232
00:14:14.080 --> 00:14:18.700
And that notion that actually the decisions a

233
00:14:18.730 --> 00:14:23.080
computer scientist makes, a developer makes, about the tools and

234
00:14:23.081 --> 00:14:27.880
applications that they're delivering can have real world consequences

235
00:14:27.910 --> 00:14:32.380
if you fail to think about the implications of the services and

236
00:14:32.381 --> 00:14:36.280
applications you build. A really famous one, just from a couple of years ago,

237
00:14:36.340 --> 00:14:40.750
it was James Liang who was the software engineer behind the

238
00:14:40.780 --> 00:14:45.340
Volkswagen emissions cheating software. I'm not sure,

239
00:14:45.640 --> 00:14:48.010
probably a lot of you remember the media stories.

240
00:14:48.011 --> 00:14:52.820
It was that for years Volkswagen had installed software in its

241
00:14:52.821 --> 00:14:56.300
vehicles that could detect the difference between an emissions test being

242
00:14:56.301 --> 00:15:00.110
undertaken on a vehicle and real world driving conditions.

243
00:15:00.380 --> 00:15:05.150
And the software would mask the emissions in those testing

244
00:15:05.151 --> 00:15:09.650
scenarios so that the vehicles appeared to be emissions levels

245
00:15:09.651 --> 00:15:12.200
compliant. And in his defense,

246
00:15:12.350 --> 00:15:17.270
James Liang said I was just doing what my bosses told

247
00:15:17.271 --> 00:15:19.070
me to do.

248
00:15:19.071 --> 00:15:23.510
His defense lawyer was quoted as describing Liang as having

249
00:15:23.511 --> 00:15:26.480
executed a misguided loyalty to his employer.

250
00:15:26.990 --> 00:15:31.850
He was the first person who was convicted in relation to the

251
00:15:31.851 --> 00:15:36.410
diesel cheating case. He's still one of the only ones to be convicted,

252
00:15:36.411 --> 00:15:40.010
which I think says something about his relative lack of seniority that he's been

253
00:15:40.011 --> 00:15:42.110
unable to avoid prosecution.

254
00:15:43.010 --> 00:15:46.400
But it shows you the consequences,

255
00:15:46.401 --> 00:15:50.300
in significant instances of failing to kind of think about

256
00:15:51.080 --> 00:15:55.850
what the implications are of the tools that you're being asked to build.

257
00:15:55.851 --> 00:16:00.080
And this comes back to things like when we talk about building AI to detect a

258
00:16:00.520 --> 00:16:01.880
person's sexual orientation,

259
00:16:02.360 --> 00:16:07.130
failing to think about in what context would that matter to decision-makers.

260
00:16:07.400 --> 00:16:12.110
Why do you need to know a person's sexual orientation?

261
00:16:12.140 --> 00:16:15.440
When would you desire having this in place?

262
00:16:15.740 --> 00:16:20.630
And you inevitably end up thinking about perhaps countries in which this

263
00:16:20.631 --> 00:16:25.310
is not something that's legal and accepted.

264
00:16:25.790 --> 00:16:29.390
It forces you to think about the consequences of what you're doing.

265
00:16:29.780 --> 00:16:34.010
And the AI sexuality one was a particularly glaring example,

266
00:16:34.040 --> 00:16:36.710
because while we talk about the ethics of that, you know, what,

267
00:16:36.770 --> 00:16:40.310
why do you build it? What do you think people are going to do with it?

268
00:16:41.090 --> 00:16:45.170
But that was an example where they used physiognomy

269
00:16:45.350 --> 00:16:46.430
justifications.

270
00:16:46.670 --> 00:16:51.380
They used the same theories that were

271
00:16:51.381 --> 00:16:55.940
used a century ago to assess women's faces and determine whether they would be a

272
00:16:55.941 --> 00:17:00.170
good mother based on the shape of their face to determine whether

273
00:17:01.160 --> 00:17:03.680
dark-skinned people had low or high intelligence.

274
00:17:04.610 --> 00:17:09.260
And while they said that they disavowed physiognomy in the paper,

275
00:17:09.440 --> 00:17:10.273
and really were like,

276
00:17:10.340 --> 00:17:14.750
we are not following physiognomy in that original study.

277
00:17:14.850 --> 00:17:16.250
And in the machine learning example,

278
00:17:16.251 --> 00:17:21.140
by Michal Kosinski and his fellow researchers, they still

279
00:17:21.141 --> 00:17:25.730
kept coming back to seeing correlations between differences in facial

280
00:17:25.731 --> 00:17:30.290
structure and theories of kind of innate

281
00:17:30.320 --> 00:17:34.760
behavior. So the study is littered with terms like

282
00:17:37.370 --> 00:17:41.920
straight men are more masculine and assertive, gay men are more

283
00:17:41.930 --> 00:17:42.763
feminine.

284
00:17:43.190 --> 00:17:47.690
This is related to the level of hormones they're exposed to in the womb.

285
00:17:48.380 --> 00:17:53.220
But what these Google researchers proved was without even using machine

286
00:17:53.221 --> 00:17:56.820
learning without anything that's artificial intelligence involved,

287
00:17:57.240 --> 00:18:01.950
they could build a model to relatively

288
00:18:01.951 --> 00:18:05.130
similar levels of accuracy by asking people

289
00:18:05.131 --> 00:18:08.490
just superficial questions about their physical appearance.

290
00:18:08.700 --> 00:18:13.470
So asking them do they wear makeup? Do they have glasses?

291
00:18:13.830 --> 00:18:17.340
Do they have a beard and correlating that with their sexuality?

292
00:18:17.341 --> 00:18:21.180
Because all the algorithm was learning to do was identify

293
00:18:21.210 --> 00:18:25.590
superficial cosmetic trends in

294
00:18:25.591 --> 00:18:26.610
populations.

295
00:18:26.940 --> 00:18:31.380
And those trends in the images of people from their dating profiles (they were,

296
00:18:31.600 --> 00:18:33.450
these were images from their dating profiles)

297
00:18:33.870 --> 00:18:38.640
did demonstrate a correlation, with straight women wearing more

298
00:18:38.641 --> 00:18:42.450
makeup in their dating profiles. There was nothing innate about it.

299
00:18:42.510 --> 00:18:46.710
It was just picking up on kind of trends in dress,

300
00:18:46.980 --> 00:18:51.930
and it really disproved some of the underlying findings in that study that also

301
00:18:52.170 --> 00:18:56.310
talked about a lack of facial hair in gay men being about underexposure to

302
00:18:56.311 --> 00:18:59.070
hormones. And again, they just pointed out actually,

303
00:18:59.071 --> 00:19:04.050
when we asked a range of gay men whether they had facial hair or not,

304
00:19:04.230 --> 00:19:08.520
it was just young men who didn't, and because all of the faces in their study

305
00:19:08.730 --> 00:19:12.960
were of young men, that's how they ended up with that kind of correlation.

306
00:19:12.961 --> 00:19:17.460
Not because it was reflective of any deeper kind of

307
00:19:18.660 --> 00:19:20.760
biological reason. It was just,

308
00:19:20.790 --> 00:19:23.550
they had a narrow sample of people that they were testing.

309
00:19:23.880 --> 00:19:27.360
So it kind of shows you the assumptions that we can make about data,

310
00:19:27.420 --> 00:19:32.370
the trade-offs that we make in terms of deciding what data is

311
00:19:32.371 --> 00:19:34.920
useful for the machine that we're building.

312
00:19:35.700 --> 00:19:40.320
Facial recognition has been another one that's been incredibly

313
00:19:40.710 --> 00:19:42.090
increasingly I should say,

314
00:19:42.091 --> 00:19:46.980
controversial. Joy Buolamwini, who is also behind this

315
00:19:46.981 --> 00:19:49.590
spoken word poem, which I really encourage you to look at.

316
00:19:49.590 --> 00:19:52.890
I think it's brilliant. So she's a researcher with the MIT Media Lab.

317
00:19:52.920 --> 00:19:55.350
Who's done a lot of work in facial recognition.

318
00:19:56.040 --> 00:20:00.000
She became well-known internationally when she demonstrated that most commercial

319
00:20:00.001 --> 00:20:04.710
facial recognition algorithms could only identify her face when she held a white

320
00:20:04.711 --> 00:20:06.360
mask up over it.

321
00:20:06.930 --> 00:20:09.750
And this is a well-known challenge in facial recognition,

322
00:20:09.751 --> 00:20:14.370
which again comes back to you have minority and majority trends in a data set.

323
00:20:14.640 --> 00:20:18.570
So if the faces in your facial recognition

324
00:20:18.630 --> 00:20:21.390
training data are majority white,

325
00:20:21.780 --> 00:20:26.310
then inevitably your facial recognition software becomes better at recognizing

326
00:20:26.311 --> 00:20:30.090
light-skinned faces. It's just, it has more data to learn from,

327
00:20:30.091 --> 00:20:34.410
to pick up these kinds of differences. And so on,

328
00:20:34.560 --> 00:20:39.390
particularly on female dark-skinned faces is when facial

329
00:20:39.391 --> 00:20:43.740
recognition becomes least accurate. And in "AI, Ain't I a Woman?",

330
00:20:43.950 --> 00:20:48.280
she goes through faces of many famous black icons.

331
00:20:48.490 --> 00:20:50.230
Michelle Obama, Ida B.

332
00:20:50.231 --> 00:20:54.910
Wells, the first black Congresswoman, and Serena

333
00:20:54.911 --> 00:20:59.890
Williams and demonstrates the confusion that commercial facial recognition

334
00:20:59.891 --> 00:21:03.940
algorithms that are widely used, Amazon's, Microsoft's, have

335
00:21:03.970 --> 00:21:06.780
recognizing diverse faces. And again,

336
00:21:06.781 --> 00:21:10.560
this is a hard challenge to solve,

337
00:21:10.590 --> 00:21:15.390
technically because we don't have lots of

338
00:21:15.391 --> 00:21:19.860
additional data to teach an algorithm to learn from in that kind of context.

339
00:21:19.861 --> 00:21:23.610
So in Australia, in the Australian context, for example,

340
00:21:23.611 --> 00:21:27.540
where we're building a national facial recognition database using driver's

341
00:21:27.541 --> 00:21:29.850
license photos from every state and territory,

342
00:21:30.090 --> 00:21:34.080
it's going to be that the faces that can be in your dataset

343
00:21:34.230 --> 00:21:38.340
reflect the demographics in your population.

344
00:21:38.341 --> 00:21:43.200
And there's not much that you can do around that. So we,

345
00:21:43.201 --> 00:21:45.560
we always end up talking about ethics. And ethics

346
00:21:45.720 --> 00:21:50.520
in AI is a huge topic right now. And so I wrote this book about it,

347
00:21:51.180 --> 00:21:54.210
which you should all go out and buy in two weeks when it actually comes out.

348
00:21:54.870 --> 00:21:59.730
Not yet obviously, but I ended up becoming more and more

349
00:22:00.330 --> 00:22:04.380
concerned around the focus on ethics in AI

350
00:22:06.000 --> 00:22:08.940
Primum non nocere, "no-keh-reh",

351
00:22:09.240 --> 00:22:12.720
someone in here probably speaks Latin and is going to school me on my

352
00:22:12.870 --> 00:22:17.790
pronunciation, I apologize, is one of the most famous

353
00:22:17.940 --> 00:22:22.230
principles of ethics in the world. Does anyone know what it means? Yeah.

354
00:22:22.320 --> 00:22:25.380
Do no harm. First, do no harm. And quite often at the moment,

355
00:22:25.381 --> 00:22:28.320
when we talk about ethics in computer science,

356
00:22:28.690 --> 00:22:32.970
we use the Hippocratic oath as our ideal. So we say,

357
00:22:32.971 --> 00:22:35.850
why don't computer scientists have a Hippocratic oath?

358
00:22:36.150 --> 00:22:40.050
There needs to be a Hippocratic oath equivalent for the tech industry.

359
00:22:40.680 --> 00:22:45.600
And this just, like, made me quite uneasy, because the Hippocratic oath

360
00:22:45.780 --> 00:22:47.880
is over a thousand years old.

361
00:22:48.510 --> 00:22:53.040
And if you were a physician in the 15th century, 16th century,

362
00:22:53.041 --> 00:22:56.100
15th century, I think is when it started being embedded in textbooks,

363
00:22:56.370 --> 00:22:58.680
but it's been around for centuries.

364
00:22:58.950 --> 00:23:03.750
And yet the medical sector is not without its history of

365
00:23:03.751 --> 00:23:06.180
profound ethical scandals.

366
00:23:06.420 --> 00:23:11.190
We had the Tuskegee syphilis study in the 20th century, which was when

367
00:23:12.990 --> 00:23:17.100
researchers monitored the natural progression of syphilis in a majority

368
00:23:17.220 --> 00:23:21.150
African-American sharecropper population for decades,

369
00:23:21.210 --> 00:23:26.010
even after a cure for syphilis became available, because they wanted to understand

370
00:23:26.011 --> 00:23:29.040
the natural progression of the disease. So you had the medical profession,

371
00:23:29.340 --> 00:23:34.050
essentially allowing these people to become sicker and sicker and die in quite

372
00:23:34.051 --> 00:23:34.920
horrific ways.

373
00:23:36.270 --> 00:23:40.350
Using utilitarian ethics, the greatest

374
00:23:40.351 --> 00:23:42.570
good for least harm, the greatest good being:

375
00:23:42.571 --> 00:23:47.130
we now understand the progression of syphilis,

376
00:23:47.131 --> 00:23:52.040
in a way that became one of the biggest medical ethics scandals of the 20th

377
00:23:52.041 --> 00:23:56.930
century. I think it was during Clinton's presidency that the American government

378
00:23:56.931 --> 00:24:01.190
eventually apologized to the victims of the Tuskegee syphilis

379
00:24:01.460 --> 00:24:05.990
study. And we actually see this playing out a lot in Australia.

380
00:24:05.991 --> 00:24:10.640
We have had a range of issues relating to

381
00:24:11.510 --> 00:24:16.280
the application of ethics in sectors where we quite often use greatest

382
00:24:16.281 --> 00:24:20.510
good, least harm to justify inflicting harm on what are

383
00:24:20.541 --> 00:24:23.560
mostly vulnerable populations.

384
00:24:23.561 --> 00:24:25.700
So one of our most famous was Harry Bailey.

385
00:24:25.701 --> 00:24:29.630
I'm not sure if any of you remember the stories involving Harry Bailey and deep

386
00:24:29.631 --> 00:24:30.560
sleep therapy,

387
00:24:30.890 --> 00:24:35.360
but he experimented on patients with mental health issues putting them into a

388
00:24:35.361 --> 00:24:39.740
coma and then conducting electroconvulsive therapy in order to

389
00:24:39.741 --> 00:24:40.640
understand the

390
00:24:42.440 --> 00:24:46.400
kind of changes in the brain and a number of patients died under him.

391
00:24:46.820 --> 00:24:50.480
And it's like, these are the decisions that we make as humans about

392
00:24:51.980 --> 00:24:54.890
what is acceptable and unacceptable levels of harm.

393
00:24:54.891 --> 00:24:59.600
And this is in a sector that has had ethics as

394
00:24:59.930 --> 00:25:03.170
its cornerstone for some time. We still see it today.

395
00:25:03.171 --> 00:25:06.080
I've kind of been horrified by this unfolding story.

396
00:25:06.320 --> 00:25:11.270
And we still have these discussions in the medical industry at present.

397
00:25:11.300 --> 00:25:12.230
What are the,

398
00:25:12.560 --> 00:25:16.490
what are the differences in the ways that we treat Aboriginal and Torres Strait

399
00:25:16.491 --> 00:25:17.600
Islander patients?

400
00:25:17.720 --> 00:25:22.250
How do we perceive pain when expressed by different populations?

401
00:25:22.251 --> 00:25:26.780
We are human and we navigate ethics according to our own

402
00:25:26.781 --> 00:25:29.360
ideals as to what harm means,

403
00:25:29.630 --> 00:25:34.340
who can be exposed to harm, what kinds of harm mean

404
00:25:34.341 --> 00:25:35.060
most?

405
00:25:35.060 --> 00:25:39.830
And this is why I kind of became quite uneasy about just looking at ethics in

406
00:25:39.831 --> 00:25:42.860
the context of artificial intelligence,

407
00:25:42.861 --> 00:25:47.480
because ultimately it's still humans designing these applications and we will

408
00:25:47.481 --> 00:25:49.490
weigh trade-offs,

409
00:25:49.790 --> 00:25:53.750
which is a huge part of how we design computer programs in general,

410
00:25:53.810 --> 00:25:57.800
let alone systems that will make predictions about humans. But we,

411
00:25:57.810 --> 00:26:00.170
we navigate these trade-offs all the time.

412
00:26:01.190 --> 00:26:05.870
And I don't actually think it's true that the technology

413
00:26:05.871 --> 00:26:09.140
sector is without ethics. For one thing,

414
00:26:09.200 --> 00:26:13.550
we love putting aspirational statements and statements of ethics on

415
00:26:13.760 --> 00:26:17.720
everything. It's on walls. It's on post-it notes.

416
00:26:17.960 --> 00:26:21.770
One of the organizations that I worked in, our meeting rooms were called

417
00:26:21.771 --> 00:26:25.610
integrity, fearlessness,

418
00:26:26.150 --> 00:26:29.390
and I can't remember the third one, but you know, like we,

419
00:26:29.810 --> 00:26:32.990
we kind of brand ourselves with the statements.

420
00:26:32.991 --> 00:26:37.100
You've got the mission statements of our most famous companies, like Google's

421
00:26:37.101 --> 00:26:38.090
"don't be evil".

422
00:26:38.720 --> 00:26:43.710
We like to hold ourselves up as very ethical kinds of organizations.

423
00:26:45.480 --> 00:26:50.250
Anyone who has worked in technology will notice this

424
00:26:50.251 --> 00:26:52.920
language wrapped around a lot of our biggest companies,

425
00:26:52.950 --> 00:26:57.720
but also that if you are in the industry and I look at myself in this context,

426
00:26:58.290 --> 00:27:02.820
a lot of people choose to work in genuinely mission-driven organizations,

427
00:27:02.821 --> 00:27:04.590
social impact startups.

428
00:27:06.530 --> 00:27:09.860
Non-profits, in government. Civic tech has real meaning.

429
00:27:09.890 --> 00:27:13.970
There are people who want to work on the hard business of making government work

430
00:27:14.000 --> 00:27:16.850
better using digital services and technology.

431
00:27:16.851 --> 00:27:20.600
And there are lots of people who are kind of driven by ethical ideals.

432
00:27:21.680 --> 00:27:25.850
There are also really famous statements of

433
00:27:26.750 --> 00:27:31.250
ethical principle in the industry that have been around for a long time.

434
00:27:31.580 --> 00:27:35.900
So A Declaration of the Independence of Cyberspace is

435
00:27:36.380 --> 00:27:41.030
kind of, particularly if you were active in the industry kind of mid to late

436
00:27:41.240 --> 00:27:43.220
nineties, John Perry

437
00:27:43.221 --> 00:27:47.870
Barlow's A Declaration of the Independence of Cyberspace, huge.

438
00:27:48.950 --> 00:27:53.570
It started off kind of, governments, you have no sovereignty where we gather.

439
00:27:53.960 --> 00:27:58.190
He wrote it in response to one of the first pieces of legislation

440
00:27:58.191 --> 00:28:00.140
the US government was

441
00:28:02.450 --> 00:28:06.560
implementing that would curtail certain activities of internet service

442
00:28:06.561 --> 00:28:09.860
providers. And it was, first and foremost,

443
00:28:09.890 --> 00:28:14.150
a kind of separation of cyberspace from the physical world,

444
00:28:14.151 --> 00:28:15.740
trying to say, you know,

445
00:28:15.920 --> 00:28:20.810
the laws of your world do not apply in the world that we have built and

446
00:28:20.811 --> 00:28:22.850
it put forward its own statement of ethics.

447
00:28:23.390 --> 00:28:27.890
We believe that from ethics, enlightened self-interest, and the commonweal,

448
00:28:28.190 --> 00:28:29.870
our governance will emerge.

449
00:28:30.200 --> 00:28:35.060
And it looked at the golden rule, which is, do unto others

450
00:28:35.061 --> 00:28:37.940
essentially what you would want done to you,

451
00:28:39.320 --> 00:28:42.140
as its founding principle. And this was in the nineties,

452
00:28:42.141 --> 00:28:43.760
so quite a small, like,

453
00:28:43.940 --> 00:28:47.900
relatively small community, before the emergence of the big platforms that we

454
00:28:47.901 --> 00:28:48.800
have today.

455
00:28:49.940 --> 00:28:54.710
And so it was a very optimistic utopian idea of what the web would look like.

456
00:28:55.100 --> 00:28:56.630
And we've seen now,

457
00:28:56.780 --> 00:29:01.130
well today I think the conversation we're all having is, well,

458
00:29:01.131 --> 00:29:05.660
actually we need something more than ethics and

459
00:29:05.720 --> 00:29:10.460
optimism, and that what is missing for a number of the kinds of

460
00:29:10.461 --> 00:29:15.350
applications that we develop and the sectors that we work in is

461
00:29:15.351 --> 00:29:16.550
accountability.

462
00:29:17.720 --> 00:29:22.160
We're having these conversations at a very high level around the functions of

463
00:29:22.161 --> 00:29:25.730
platforms like Twitter, Facebook, Google,

464
00:29:25.731 --> 00:29:27.530
what role do they play now, as

465
00:29:27.710 --> 00:29:31.070
media, in our democracy and the information we receive,

466
00:29:31.430 --> 00:29:34.100
but also at a micro level in terms of, well,

467
00:29:34.850 --> 00:29:39.740
what expectations do we have of organizations building AI for the medical

468
00:29:39.741 --> 00:29:44.200
sector, for example, or for the delivery of welfare?

469
00:29:44.380 --> 00:29:48.730
Like what, what practices should be expected?

470
00:29:49.180 --> 00:29:50.290
And it brings me back.

471
00:29:50.380 --> 00:29:55.330
It makes me think about how other sectors have emerged over the last

472
00:29:55.331 --> 00:29:56.164
century.

473
00:29:56.440 --> 00:30:01.300
So this was the pharmaceuticals sector at the turn of the

474
00:30:01.301 --> 00:30:02.440
20th century.

475
00:30:02.830 --> 00:30:06.910
One of the most famous was Lydia [inaudible] vegetable compound,

476
00:30:06.911 --> 00:30:10.870
which "has cured more women than any other medicine in the world".

477
00:30:12.100 --> 00:30:16.750
And you had these kinds of incredible statements being made about the

478
00:30:16.751 --> 00:30:20.650
efficacy of medicines. It's,

479
00:30:20.680 --> 00:30:25.270
I'm pretty sure it holds the record for the greatest number of actual cures of

480
00:30:25.271 --> 00:30:29.650
women's ills. This was Lydia [inaudible] vegetable compound.

481
00:30:30.160 --> 00:30:34.690
The really famous one was Warner's Safe Kidney and Liver Cure.

482
00:30:35.020 --> 00:30:38.230
A remedy for Bright's disease, diabetes,

483
00:30:38.260 --> 00:30:41.620
and all kidney liver and urinary diseases.

484
00:30:41.650 --> 00:30:46.450
We saw these kinds of claims being made for every kind of pharmaceutical

485
00:30:46.451 --> 00:30:49.060
that you could think of. California Syrup of Figs,

486
00:30:49.090 --> 00:30:52.870
a perfectly safe and natural fruit laxative. And these were

487
00:30:55.450 --> 00:31:00.400
like the dominant pharmaceuticals for households, advertised in every major

488
00:31:00.401 --> 00:31:05.320
newspaper until Samuel Hopkins Adams published an

489
00:31:05.321 --> 00:31:09.910
incredibly detailed exposé of what actually went

490
00:31:09.911 --> 00:31:13.030
into pharmaceuticals a hundred years ago.

491
00:31:13.390 --> 00:31:17.980
So Lydia [inaudible] vegetable compound was almost pure alcohol.

492
00:31:19.720 --> 00:31:22.000
Warner's Safe Kidney and Liver Cure,

493
00:31:22.060 --> 00:31:25.960
as well as being a high percentage of alcohol, contained glycerin,

494
00:31:25.990 --> 00:31:30.700
which is used in making soaps and ingredients typically used in

495
00:31:30.701 --> 00:31:35.650
making fireworks and fertilizers. The California Syrup of Figs,

496
00:31:35.651 --> 00:31:38.290
which was marketed exclusively to babies,

497
00:31:38.291 --> 00:31:41.260
had more alcohol in a dose than a full-strength beer.

498
00:31:42.940 --> 00:31:47.620
And this exposé really kicked off scrutiny of the

499
00:31:47.621 --> 00:31:49.770
medical sector. And,

500
00:31:49.771 --> 00:31:53.470
and what I think is really telling is at the moment in AI,

501
00:31:53.471 --> 00:31:55.660
we're kind of in a similar situation,

502
00:31:55.661 --> 00:31:58.780
you can advertise a product and say anything. You can say:

503
00:31:59.620 --> 00:32:03.580
My algorithm for facial recognition is better than humans at detecting and

504
00:32:03.581 --> 00:32:04.870
identifying faces.

505
00:32:05.590 --> 00:32:10.330
My employment algorithm is more accurate and less biased than any

506
00:32:10.331 --> 00:32:14.950
human panel will ever be without any need to demonstrate how you are,

507
00:32:15.250 --> 00:32:19.360
how you are deciding accuracy, what limitations your system has,

508
00:32:19.570 --> 00:32:22.960
what data it was trained on. It's all very opaque.

509
00:32:23.020 --> 00:32:27.070
There are no expectations around what

510
00:32:28.030 --> 00:32:32.800
we should be forced to explain about the systems that are being introduced and

511
00:32:32.801 --> 00:32:35.170
actually being used to make decisions,

512
00:32:36.040 --> 00:32:40.880
but it is starting to change. New York City Council has been looking at

513
00:32:40.881 --> 00:32:45.530
legislation prescribing open algorithms and open programs where they're being

514
00:32:45.531 --> 00:32:48.320
used to make decisions about city services.

515
00:32:48.950 --> 00:32:51.650
They just currently have a task force looking at it,

516
00:32:51.680 --> 00:32:54.260
but there is starting to be discussion around, well,

517
00:32:54.261 --> 00:32:59.210
what expectations should we have to understand how a

518
00:32:59.211 --> 00:33:03.220
system works, but also the limitations, in the same way that you pick up your

519
00:33:04.780 --> 00:33:08.380
prescription and it will tell you its side effects, its activity,

520
00:33:08.381 --> 00:33:11.320
the ingredients, and what it is a remedy for,

521
00:33:11.380 --> 00:33:14.530
what should we expect to be described about AI?

522
00:33:14.950 --> 00:33:18.850
We've also had key areas of legislation,

523
00:33:18.880 --> 00:33:23.290
the General Data Protection Regulation being one, introduced by the EU, that start

524
00:33:23.291 --> 00:33:27.850
to put expectations around the transparency

525
00:33:28.210 --> 00:33:33.040
of systems that are making decisions about people. Sorry. Yes. On one hand,

526
00:33:33.041 --> 00:33:36.940
I'm like saying at some point we're going to need to have a conversation at a

527
00:33:37.090 --> 00:33:39.130
government and policy level about accountability,

528
00:33:39.820 --> 00:33:42.970
but there's also a lot that we should be doing as individuals.

529
00:33:43.180 --> 00:33:46.630
And the first is, of course, to be curious about

530
00:33:46.640 --> 00:33:49.600
the way an automated system works. It is not magic.

531
00:33:50.320 --> 00:33:55.300
These things are hard. As a person implementing a system

532
00:33:55.301 --> 00:33:58.300
like this, you're making lots of decisions about, well,

533
00:33:58.301 --> 00:34:02.110
what data can I use as an input is the model that I'm using suitable for this

534
00:34:02.111 --> 00:34:04.630
purpose? How am I auditing its results?

535
00:34:05.770 --> 00:34:08.350
It's human engineering. So not, yeah,

536
00:34:08.500 --> 00:34:12.160
accepting the decisions made by an automated system at face value.

537
00:34:12.161 --> 00:34:15.130
Being curious about the way they work is really important,

538
00:34:15.430 --> 00:34:18.640
particularly when it's about individuals.

539
00:34:19.870 --> 00:34:24.700
Our ability to make predictions that are accurate at an individual

540
00:34:24.701 --> 00:34:29.140
level is still very hard in comparison with making

541
00:34:29.141 --> 00:34:31.030
predictions about populations,

542
00:34:31.360 --> 00:34:35.740
making general predictions about the way a whole kind of a community of people

543
00:34:35.741 --> 00:34:40.570
is moving is easier than trying to narrow that

544
00:34:40.571 --> 00:34:43.870
down to a prediction about a person within that community.

545
00:34:43.871 --> 00:34:46.750
There are these outliers. Ask questions.

546
00:34:47.050 --> 00:34:50.800
If you don't understand the way a system works, ask questions,

547
00:34:51.100 --> 00:34:54.730
but most importantly, demand explanation.

548
00:34:55.330 --> 00:34:59.020
This is an increasing field in AI. I'm happy to talk about it in the break.

549
00:34:59.021 --> 00:35:01.630
It is challenging whether we're talking about humans,

550
00:35:01.631 --> 00:35:04.780
being able to explain things to other humans or machines,

551
00:35:04.781 --> 00:35:06.670
being able to explain the way they make decisions,

552
00:35:06.940 --> 00:35:11.410
but this is kind of where we should be: treating these systems

553
00:35:11.410 --> 00:35:15.820
the way that we treat systems being introduced in other sectors,

554
00:35:15.880 --> 00:35:18.910
pharmaceuticals, construction, aviation.

555
00:35:19.210 --> 00:35:22.720
We have a lot of expectations about what good practice looks like,

556
00:35:23.020 --> 00:35:27.670
but we haven't quite articulated what that is for AI in the

557
00:35:27.671 --> 00:35:29.050
computer science industry.

558
00:35:30.031 --> 00:35:33.850
I just kind of wanted to end here because a lot of the systems that we're

559
00:35:33.851 --> 00:35:36.340
talking about in the AI space,

560
00:35:36.341 --> 00:35:41.130
when we say a system can know your sexual orientation or

561
00:35:42.120 --> 00:35:45.150
know your suitability for a job,

562
00:35:45.180 --> 00:35:49.770
it does not know. It is making a prediction based on probability.

563
00:35:50.100 --> 00:35:52.350
It is, it cannot see the future.

564
00:35:52.620 --> 00:35:57.420
It is using statistics to try and make an educated guess

565
00:35:57.421 --> 00:35:58.890
about what the future looks like.

566
00:35:58.920 --> 00:36:03.660
And I just want to take it back to Hannah Arendt, from The Human Condition: the

567
00:36:03.661 --> 00:36:06.720
new always happens against the overwhelming odds of statistical laws and their

568
00:36:06.721 --> 00:36:10.680
probability, which for all practical purposes amounts to certainty; the new,

569
00:36:11.130 --> 00:36:13.200
therefore always appears in the guise of a miracle.

570
00:36:13.440 --> 00:36:17.640
And what I think we want to avoid is the practices of

571
00:36:17.641 --> 00:36:22.290
statistics being treated as straitjackets and used to make

572
00:36:22.291 --> 00:36:25.860
decisions and therefore cutting off the possibility of miracle altogether.

573
00:36:26.280 --> 00:36:26.910
So thank you.

