Full Q&A: Questioning ethics in the era of artificial intelligence


The era of artificial intelligence brings forth ethical intricacies, shedding light on the challenges and considerations that accompany the integration of technology in various domains. 

In his podcast, our CEO and founder, Blaise Hope, explored the issues surrounding AI ethics in a thought-provoking conversation with Nuzhat Jabinh, who brings a diverse professional background in relevant fields.

Jabinh has extensive experience with AI startups and beyond, including stints as Head of Communications for Headlight AI, digital consultant for Trees for Cities, and project manager for Lloyd’s of London.

She has just completed a postgraduate course in New Neuroethics at the University of Oxford and has a number of key achievements in building global strategies, including increasing online donations by 49 percent.

Hope: I'm just looking through some of your stats. Basically, you're very good at the actual business end and the fundraising element here. Why don't I let you speak a bit more about yourself?

Jabinh: I think you've given me quite a long introduction, so thank you and pleasure to be on this. I look forward to your overview of ethics. I think it's an important area for all of us.

I think people tend to think that it's something [that] one part of an organization needs to take care of or that is relevant to an academic specialist. It's something that affects all our day-to-day lives. And that applies every bit as much to tech companies and to the kind of innovation that we're seeing now. 

Ethical issues of AI

Hope: Ethics and moral philosophy [are] concerned with the question of how people ought to act in the search for the definition of right conduct and the good life. 

Derived from the Greek word ‘ethos’, ethics covers questions such as how to live a good life, our rights and responsibilities, the language of right and wrong, and moral decisions. What is good and what is bad?

Philosophers nowadays tend to divide ethical theories into three areas: meta-ethics, normative ethics, and applied ethics. 

Meta-ethics deals with the nature of moral judgment and with the origins and meaning of ethical principles, the guiding principles that inform the judgments you make.

Normative ethics is concerned with the content of moral judgments and the criteria for what is right or wrong.

The third one, applied ethics, looks at controversial topics like war, animal rights, and capital punishment, which I guess is the extrapolation to extreme circumstances.

Do you like that definition? 

Jabinh: I think one of the things that I'm always very aware of is that there are so many cultural differences. So in terms of drawing a kind of absolute baseline, I think there are a handful of things, but I actually think it's a very small number of things that are considered absolutely wrong.

So the extent of it is, I think, very woolly. It changes across time, it changes across cultures.

Hope: Yeah, certainly.

We've pulled out seven ethical issues that we believe revolve around AI. The first one here is: what if AI makes people lose their jobs? Second, what if AI learns the bad side of what we program it for? Third, killing people and corporal punishment, always hot topics; we'll definitely talk a lot about this.

What if AI goes rogue, just like in Terminator, which my staffers love mentioning? Keeping control over AI is another ‘what if’ moment: do they beat us? Should we treat AI as a machine or as something more [human]? And finally, bias by AI.

What if AI makes people lose their jobs? A McKinsey report says about 800 million people could lose their jobs due to AI by 2030. This raises the concern of whether machine development should triumph over people's welfare. I have a hot take on this, but I want you to go first.

Jabinh: I think the short answer is yes, people are going to lose their jobs.

I don't mean to be alarmist, but I think it's a lot like the Industrial Revolution. In terms of the analysis I know about, there are far more jobs now than there used to be in the past, and a whole variety of jobs that didn't used to exist. Of course, we still have the fallout of the Industrial Revolution in the UK alone.

There are whole chunks of society that haven't recovered from the fact that manual labor isn't a huge sector anymore. So I think this is going to be a kind of version of that, possibly more intense because the speed of technological change is so much faster.

Hope: Yeah, absolutely. I think it happens so quickly. I think keeping people engaged is what's so crucial because it's gonna keep happening very fast. But by the same token, we will adapt very fast. And so what that means is I think you have to view AI in the right way.

The Industrial Revolution happened, and modern healthcare happened at the same time. The ability to mass-produce and put out medical equipment [and] medical technologies means more people are alive now than ever. And a lot of them are employed, comparatively speaking.

And what it means is the jobs become slightly different. The way you apply yourself becomes slightly different, and I think a good example is how they're applying AI to language generation.

It just raises the bar for what a writer is expected to accomplish. If the AI can pull out the data, that just saves the writer a lot of time. They can go check it, and they can suddenly produce a lot more work, which is good for everyone.

They're still applying themselves to the highest value thing that they can create that a computer can't. So it's a question of working in harmony.

And initially, of course, let's say someone engages with that, [and] some people will lose their jobs, yes, but they could also be doing their other job at sort of triple efficiency or something like that. So it is a question of how you interact with it.

Jabinh: I think one of the problems for me is the kind of jobs people are doing. I don't mean to be arguing negatively here, it's just that I think that a lot of the work now is not intrinsically meaningful in the way that it might have been in the past.

So yes, there was more drudgery in the past. Yes, it was physically much more demanding. At the same time, I think there are whole swathes of, for lack of a better term, the Western sector, where people know that their job isn't really of any great value, and that's very difficult.

Hope: Well, that's very difficult, because how do you, as a grandfather or a father or a mother, tell your grandchildren that “most of what I'm telling you isn't really gonna work [anymore]”?

‘Cause being a plasterer your whole life is just not something that you can reliably count on. I can't guarantee you that's still gonna be a job, so I can't really advise you on how you should behave in this current market.

It's like, you should learn your plastering, but be aware that there may be a new plaster or a new machine that's going to make that irrelevant soon enough.

And then if that does happen you need to be ready to learn the next thing. And I think maybe not enough people are doing that. I'm hoping that they do [it] for their own sake.

Jabinh: I think that's a really important point. I'm so in favor of education [and] access to high-quality education.

I don't think it's available for enough people by any stretch. People talk a lot about lifelong learning. If somebody doesn't have a good foundation when they're quite small, I think it makes that whole process more difficult, especially because for some people it feels like a kind of constant background pressure that they have to keep updating their skills. That there is probably going to be something that means they have to potentially have a radical career shift.

There are people that enjoy that challenge. I think that in terms of adaptation for the majority of people, I think it's slower. It's much slower. 

Hope: I think it's a brutal thing. You've gotta learn. 

Conforming to new ways

Hope: I studied magazine journalism when I was at university. I went to journalism school and I'd edited magazines [and] really run them. But in terms of where I wanted to go with my career, I knew that I'd [have] a lot more to learn. But I was worried that I wouldn't learn enough by doing, just broadcasting or just focusing on newspaper journalism.

So I left all of that. I did magazine journalism and I ended up working in a newsroom and learned all about newspaper journalism, and then TV. I learned all about broadcast journalism just like that. And they're definitely transferable skills.

What's actually happened is that was a brutal time and I did feel like I was quite behind, but now I have the variety to draw on. But it's hard to dangle that carrot in front of somebody: go through a lot of pain, a lot of difficulty, and a hard, tough time in terms of the commitment that's gonna be expected of you in your work.

It's very difficult to balance it all [and] make sure you have the income to learn a new skill and to be able to apply it in the future. But it is what you need to do.

So I think education has to change. I think development within companies must change. You've gotta be constantly teaching your staff stuff, and especially when it comes to skills that are likely to be altered or replaced every few years. 

Jabinh: I think creativity is a huge factor as well, and it's something that I'm not convinced, certainly in this country, the current education system in any way nurtures.

I think that there's a kind of social pressure for people to conform and that actually stems a lot of people's natural creativity. 

Hope: There are enormous dangers in forcing people to conform. I think you must push people to try to extend the bounds of what they can do as much as possible.

I think one of the things that I think education's really failed on is GPAs. Maybe that's not the case everywhere, but [with GPAs] you've effectively set down the rules of the game to get the points, to get the grade that it's gonna get you to the next stage.

But you're not learning to get to the next stage, you're learning to get a grade. And so what happens is you game the system to get the results from it. 'Cause your result is evaluated purely by that. It's not evaluated by breadth, [it] doesn't evaluate adaptability, 'cause those are much harder to evaluate.

So people aren't really being taught to [adapt] unless they teach themselves, which is great if you've decided to teach yourself. But if you, perhaps naively but not unreasonably, trusted the system to prepare you, to teach you, it is quite a betrayal.

Jabinh: And there's also a huge aspect in terms of inequality. So I'm thinking, again, only of the UK, and in my lifetime that has become increasingly unequal. So people's access to education, their choices about it, all of that is hugely affected by where they are within society in the first place. Let's think of it in terms of how it might affect education.

Imagine, for example, if we had the opportunity to offer a really high-quality tutor, because it was software, to huge numbers of people. That could be game-changing: something that was actually interactive because it was AI.

And where it could be potentially tailored for different types of learning behavior, different types of children's personalities, and adults too. There might already be a company working on this.

Hope: What there is in the world, we know, is an AI that connects to a major company's customer service [phone] line. It runs your data, based on pre-existing knowledge of who you've called, to pair you with the best kind of person to give you, to satisfy you, and to address your concerns.

Jabinh: Exactly. So that could be, as you say, used as a model for the kind of tutor.

There was somebody I was speaking to recently who was telling me about a company, I think it was called Yen Pic, where what they're doing is similar in terms of customer service. They're filming people and they're using that film to then generate essentially a sales team.

So it can be a personalized, tailored message. And it means that that can be reused and it can be scaled to a degree that it couldn't be if it was a human team. 

Shielding AI from negativity

Hope: What if AI learns the bad side? Microsoft's chatbot Tay, for example, learned how to make racial slurs just one day after its launch on Twitter. 

Now the language thing, not to downplay it, but there's something more cultural there than deliberately malicious, if that makes sense. 

Jabinh: I wouldn't make that distinction. I think this touches on an area that fascinates me, which is bias, and I know you have it as one of your points later on.

To me, this is a perfect example of that. So depending on what data set the machine learning is trained on, that is going to potentially give us exacerbated problems or amplified problems that we already have, that we know that we have. 

Hope: Would you say this issue focuses then on where AI thinks it's being logical and helpful?

Jabinh: I wouldn't frame it like that. I would say specifically data bias. So the data set that we're using is so biased to begin with. 

I'm thinking of one in particular that a lot of your listeners may have read, and if anyone hasn't, I recommend it highly: Invisible Women by Caroline Criado Perez. It's a quite comprehensive look at, essentially, data bias in terms of gender, and it was staggering.

There were things in there that I hadn't realized were such a huge issue. Her point was that the data sets that were being used in a whole variety of areas were so biased to begin with that the outcomes were almost inevitably sexist.

There were companies that were using AI for hiring, for example, and because those systems had been trained on data sets that were almost completely male, in terms of their top management, the system ended up making more sexist decisions than a human would have. So they stopped using it.

Hope: Can you remember an example?

Jabinh: I think it was Amazon and I think they stopped using it as a consequence. It might have been Microsoft. I forget which one of the big tech companies.

Hope: But not a company you would expect?

Jabinh: No. On the contrary, I would say I would completely expect it because they have the resources to think, “okay, we want to use better software, we want to optimize this system.” And at the same time, they have the inherent data bias that is prevalent across the whole world.

I mean, one of the problems with it is getting a non-biased data set. Getting a data set that is actually valid means that people would have to put effort into that in the first place. They'd also have to spot at the beginning that their data set is biased, and there is often nobody within the organization to actually do that. So it's only when they have the results later on that they see this was a problem.
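To make the mechanism Jabinh describes concrete, here is a minimal sketch, with entirely invented data, feature names, and numbers, of how a model trained on skewed historical hiring decisions simply learns to reproduce that preference for otherwise identical candidates. It is an illustration only, not the system either company actually used.

```python
# Hypothetical illustration: a "neutral" classifier trained on biased hiring history.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def historical_record():
    """One synthetic past hiring decision: features are [years_experience, is_group_a]."""
    experience = random.uniform(0, 10)
    is_group_a = random.random() < 0.8          # historical applicant pool was skewed
    # Past decisions rewarded experience, but also (unfairly) group membership.
    hired = experience + (3 if is_group_a else 0) + random.gauss(0, 1) > 6
    return [experience, 1.0 if is_group_a else 0.0], int(hired)

records = [historical_record() for _ in range(5000)]
X = [features for features, _ in records]
y = [label for _, label in records]

model = LogisticRegression().fit(X, y)      # nothing biased about the algorithm itself

# Two candidates identical in every respect except group membership:
print("P(hire | group A):", model.predict_proba([[7.0, 1.0]])[0][1])
print("P(hire | group B):", model.predict_proba([[7.0, 0.0]])[0][1])
# The gap between these probabilities is bias inherited from the training data,
# not anything the model was explicitly told to do.
```

Note that simply dropping the group column would not necessarily fix this, since other features can act as proxies for it, which is roughly why spotting the bias in the data set up front is the hard part Jabinh points to.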

Hope: Terrifying really, in a way. But how long does it take for these problems to be identified?

Jabinh: I don't know what the exact timescale was. My guess is it would've been within a couple of years.

But then think of that in terms of the bit of ethics that particularly interests me, which is applied ethics. In that time, it would've affected hiring decisions. There would've been people who probably didn't get hired, who didn't get promotions, who may have been the best candidates.

Hope: That is the great shame of trying to find candidates. It's so hard to interpret who's a good candidate, because you need to know how to present yourself, but also how not to, especially if you're coming at it cross-culturally, and how not to trigger that sequence of decisions that makes someone think you are pushing in a direction you don't even realize. It's just endlessly complex, how people find each other.

Jabinh: Yes. It's complicated. At the same time, I recommend Kahneman; there is a brilliant outline of his system in John Lanchester’s book about money. And he has, I think, a six or seven-point scale.

He has a completely evidence-based way of recruiting that boils down to working out very clearly what the person actually needs to be able to do. And I think that's one of the tricky things. 

A lot of the time with recruitment, people have a rough idea, but it's a bit like a design brief. So one of the difficult things is actually getting a good brief in the first place.

Hope: A good brief is gold. You can do anything with a good brief and nothing with a bad one, no matter how detailed it is.

Warfare and responsibilities

Hope: We're just gonna start to touch on a hot-topic issue: warfare. Killing people, corporal punishment. Is it okay to allow a computer to kill, or to make the decisions that lead to it?

Several countries in the world have drones and pilotless planes equipped [with] weapons, so soldiers don't have to make the kill themselves. They can do it from trailers anywhere around the world, and they're only one step away from the actual trigger being turned over to AI too.

There are those who believe the lack of human touch in war will make wars even less humane, and killing people with machines might make soldiers feel less empathy toward the enemy. So I would like to just start by saying something about allowing AI to take over killing decisions on a battlefield.

So, in the military, everything is about maximum utility. Now, tactical decisions will always effectively be the outcome of a commander trying to outfox another commander. You are making a guess as to what their actions are gonna be.

And you do that with probabilities, but you also do that not knowing the variability of other factors, such as the increasing internal pressure that you are putting on behind the scenes. Just a million different things that you would not ever be able to get the data for. Not even if you had access to the places, to the enemy, would you be able to get that data.

And trying to quantify that, I think you immediately have enormous issues, where it not only becomes far less humane, it'll [be] less clear, easy to deceive within the data, in terms of you can just hide amongst civilians.

You'd have to make a calculus for dealing with that, which is incalculable based on who's a higher value loss and who isn't. I don’t know why you would ever do it. You know, from a military perspective, I think you've just got to be able to have that control.

‘Cause you wouldn't have any control over where it went. You wouldn't have any control over its outcome. You wouldn't have any more options as to how things can play. So that's just what I'm saying on that. Tell me what you think.

Jabinh: To me, this is a classic example of one of the aspects of the course that was most fascinating to me. A lot of the time, the technology doesn't change the ethics.

So the innovation, the leap forward in technology doesn't change the ethical questions. So to me, the much more important question is: was it right to go to war in the first place? My personal strong feeling is usually no, I don't think it's usually justified.

You mentioned killing civilians potentially, or those kinds of rankings, those kinds of things already happen. They've happened for so long. That kind of decision-making has already been going on. I don't think it's about optimizing that kind of decision-making. 

I think it's about asking the earlier question, which is: is this at all the right thing to do in the first place? I think the answer is very rarely yes.

Hope: I think you're right. I think it is very rarely yes. And I think there's something almost poisonous about them trying to retroactively apply decisions when you are within the war itself because it just leads to outcomes that are not 100 percent about the utility of what's going on, which seems to just drag things on and make them far worse for everybody and for a much wider group of people, for a much longer time. 

And that seems to me to be what has happened in the past 50 years. A loss. 

Jabinh: Yeah. Completely, completely.

Hope: And sort of it's led us to these strange circumstances where there are countries that just live under the shadow of 2 million drones. It's just madness because no one's put a soldier on the ground.

It's such an incredible imbalance, and utterly for political convenience, ‘cause it’s got nothing to do with military utility. It's clear it has got nothing to do with it. You know, there's a reason wars are not run as assassination programs. It's about applying the use of force over a given territory.

It's about that, basically, or any other similar definition. I agree that it's a great tragedy. Whether you're there in the first place, that's the question. And then it's like, well, what decisions do you make to stop that happening?

Jabinh: I think those are essentially political ones. So I feel very strongly that in this country, despite all kinds of protests and objections, and points where it was clear the majority of the population didn't support a war, the politicians at the time went ahead with it. So it's very hard to know how to prevent that.

Hope: So let's just talk. We're talking about Afghanistan and Iraq. Everybody understood Afghanistan, and nobody understood Iraq. No, not really. And we kind of understood Syria, kind of, but yet we went into Libya, and I sort of have no idea why we thought we had to–

Jabinh: We're arming Yemen in this country, still.

Hope: Yes. Then the interesting one is, [...] like we act sort of shocked that Crimea was taken. It's Russia's only year-round deep water port that doesn't freeze over. They weren't gonna let it go. I mean, they weren't, but you're putting pressure on something where it has to go.

It's because utility's gone out the window. It's now a question of what can you pose? What can you pretend? Which is almost so much worse than if you just let the generals figure it all out. 

Jabinh: I mean, I would want to dial back further than that. I wouldn't want there to be generals. In terms of my biases, I'm deeply anti-war. So I think the technology is, again, doing what it usually does, which is exacerbating existing inequalities or existing problems. It's not really bringing up new ethical challenges, if you see what I mean.

Hope: Fair enough. 

I want to get sidetracked for a second. Can you tell me a bit about where your own philosophies come from, how they were passed down and developed? Would you describe yourself as a pacifist?

Jabinh: No, I think it's problematic, so I wouldn't describe myself as a pacifist, because, thinking purely on a personal, physical level, I believe in a right to self-defense.

So if I start from that point, then there are points when I think, “no, violence might be the right option.” I think, again, it becomes a problem of scale. This may be a slightly naive perspective, but I think that there's a degree to which, if it was one-to-one combat between roughly equal parties, there is a sense, perhaps a false sense, that there's some kind of integrity there.

And I think as soon as we had technology that was as sophisticated as a gun, we ran into the kinds of problems that you were talking about, where it becomes something that's physically very easy and somebody may die as a consequence. I can't see any benefit in a war. I can, again, take the self-defense point.

So if there was a threat of invasion and people would then argue, how are you going to prevent threats of invasion? And my answer to that would be diplomacy. I would like to see a lot more diplomacy. 

Hope: I agree with you there. I also think joint economic interest has been so successful at preventing war as to be, you know, just miraculous, basically. And, similarly, political systems that offer at least some obvious accountability also kind of automatically act as an immediate tension lowerer.

Jabinh: The EU is a great example of that to me.

Hope: Yeah, I think the EU is a good example of that, definitely.

What is interesting is that [...] I have many war deaths in my family, in lots of different wars, for lots of different reasons and in lots of different functions. So the way that I've received that is not, in a single instance, directly from, you know, someone who was engaged in any part of it, because it didn't last very long.

And so I could have a very positive view of what some of the attrition can be, [but] I actually have no confusion about the horror of war, if that makes sense. I've also spent enough time in strange parts of the world that you see enough to understand what it sort of means. And I have a big problem.

I've told you this before, with people taking up the causes of faraway places they don't understand, [places that] are finally stable. And then they don't even know that the fact that they've done that, for their own ego or, as far as I can see, just to make themselves sound controversial, has led to the death of quite a few people there.

And I know, and I'm trying to be non-specific, but you know, I've not heard from people again, people who were in towns where stuff happened because a bunch of Western journalism students couldn't stop tweeting. And it never needed to happen. There was never gonna be another outcome [...] ‘cause at the moment it's sort of a negotiated peace.

But my view on things [is that] I have a lot of faith in actual, very high-end military systems, because they are one of the very few things that have been shown to actually temper the horrors of war when you're actually in place. I do believe that there's gonna be an application, but I do agree: I think creating a situation where it doesn't have to happen is really the ideal thing.

And as much as [I support] the EU, and I am pro-EU, I voted for Remain, but I've accepted that that's not what happened with our country. I give credence to the arguments of those who wanted to leave. Not the sort of hashtag stuff, but the real arguments about disenfranchisement and so on.

But I do think the EU was able to skirt a lot of that, frankly, because the UK was willing to do things a lot more publicly for a long time, which was very convenient for them, and I think they'll continue to be able to do that. But this comes down to: what can we then use AI for to actually stop that happening?

Jabinh: It would be interesting to optimize diplomacy, wouldn’t it? I'm not aware of anybody studying this, but there must be somebody who is studying what actually works. 

Hope: Yeah, I think you can, I think you can find data sets as well on human behavior. 

Jabinh: There's good work on negotiation. I've read that. That's fascinating. 

Hope: Well, I think the number one benefit would be [to] figure that out, because then you'd figure out whether what they want is actually fair, and whether they're gonna say no no matter what. And that's what the negotiation is [for].

Jabinh: I'm optimistic, though; I always think there are going to be points of agreement. There'll be something in a diagram that people overlap on, and then you can get some kind of turnaround.

Hope: If you know they wanna make a show of saying no on one topic, and they're not gonna let any other outcome happen no matter what you say, you let them have it and then give them the one where you can. Like, let's do both: give them the public face and then the practical one. Interesting.

Okay, just one final thing: the lack of human touch will make soldiers feel less sympathy for their enemies. I'm not sure, because people are getting plenty of PTSD from going into trailers and murdering people from drones.

I'm not sure that there's credence to that, ‘cause I think they do know what they're doing. And I think, if the solution is to get a kid to do it so they aren't conscious of it, then we're so far–

Jabinh: Not very good. It's like we're asking completely the wrong questions if that's one of the answers.

Hope: Okay. What if AI goes rogue? This kind of comes back to [when] we were saying in the beginning about the AI that picked up racial slurs. And I guess what I was trying to get at is, does the AI pick up a reflection of horrors within a data set and push that out?

Does it unwittingly pick it up and then expand on it, and take it further and further and further? Or does it turn against us? I don't know exactly what we're talking about here. Are we just talking about it going rogue?

Jabinh: Essentially, all three scenarios are possible. So the classic one is Nick Bostrom's, about something which is designed to make paperclips and just goes completely out of control. That's its complete focus, and we just end up inundated with paperclips. And there are lots of obvious things people can say against that.

In terms of his actual argument, I think it is valid that, as I said, all three of those possible scenarios could take place.

Keeping control

Hope: Next one. [...] Look, keeping control over AI is another ‘what if’ moment. There is a feeling that one day AI will surpass our intelligence. And this is kind of, I guess, where a lot of the sci-fi themes come from. Can you tell me a bit about [it]? Is that possible? You tell me the applications.

Jabinh: My view on that is a straightforward yes. I think that as humans we're so in love with the idea of our own intelligence that we tend to think of it as some kind of peak thing.

It's not some kind of peak. I don't know if you've read the fascinating book Other Minds. It's one I recommend, not [just] because it's so enjoyable, but because it's about octopus intelligence.

Hope: Okay. I think I know what you mean. 

Jabinh: Essentially, one of the reasons that humans have dominated the planet is that we live longer. So if we're talking about the measure of intelligence, the octopus could give us a good run for our money. They just have a very short lifespan. They would never be in a position to threaten us as a species, or not as far as we know now.

So the idea that our intelligence, as I said, is some kind of pinnacle, is something that we are very invested in as a species in thinking. Almost all our religious texts, those perspectives put us at the top of a kind of pile. Even in terms of evolution, there's a tendency to think of us as the kind of endpoint.

So the idea that something could overtake us in terms of intelligence, I find that very easy to believe. A friend of mine used to joke that the internet is already such a thing, that it would be impossible for one single human, or even a collective of humans, to switch it off now. And if you think of a simpler thing like electricity, imagine trying to dial back that technology.

Now imagine trying to say, “Okay, we're just not gonna use electricity anymore. We've decided it's a bad idea.” Even if we had a global collective agreement on that, how would we actually enforce it? 

So I think the idea that AI could surpass our intelligence fairly rapidly [is] entirely possible. I don't see any strong arguments against that.

Hope: Fair enough. I think there's the question: does it simply add a complementary tool to our intelligence? I think that is kind of the question.

Jabinh: It might be on that level now. So I think there's a good sense that yes, it's a complementary tool. It's something Google emphasizes a lot, that it's enhancing, it's like a kind of collaborator. It's going to make all your work better, a lot like the thing you said at the beginning about articles, that it would enable somebody to do better work. So a kind of optimization software, essentially.

The leap from that to it potentially going way past that, I don't feel that's a big leap. I'm very aware of Moore's Law in this context.

I can think of a tech conference a couple of years ago where I was astonished at some of the things AI could do then. So things that I thought were completely feasible, I just didn't know they were already in the world. 

And I think that sensation was the kind of tide rising. I think that just gets faster. I mean, taking a sci-fi kind of view, I'm in favor of rights for robots. I don't think that it's something which is too far [off].

Hope: So do you think robots should get rights? 

Jabinh: Absolutely.

Hope: Okay, then here's the other question: do we become cyborgs or do we just get replaced by robots?

Jabinh: Interesting. There may not be a kind of hard line. I'm thinking of ‘Blade Runner’ inevitably in these kinds of scenarios. And it's one of the reasons why rights for robots appeal to me because I think it may be the case that it's not easy to tell where on that spectrum somebody is. 

Hope: Yeah. I do think that that is interesting. And I think we are going to integrate [that].

Jabinh: I mean, there's a kind of technology where somebody can operate, say a prosthetic arm. So essentially their thought is operating that.

Hope: Well, augmented reality is where we're going; we're heading straight into that. We are already there actually, since we had smartphones, since we had laptops. We're extending our capabilities and our capacity to project to the world from within us with something that is not physically attached to us, but it will be. It can be, and it's as good as attached, pretty much.

So, I'm very much in that camp. I think the question of “how do I type properly on a phone? I need to use a keyboard, I need to do this” becomes, well, how do I just get the words out onto the keypad in the first place? I think we're getting nearly there. I wonder where we'll be in 10 years.

Jabinh: I wouldn't want to make any predictions. I mean, if you think 10 years back, that's quite unnerving.

Hope: So exciting though. If you jump on Amazon and just look at the startups that they're pushing, and just wonder. It's like, I bought a pair of Bluetooth headphones that lasted me two hours [per charge]. Super comfortable, and [...] I didn't have to charge the power [bank] back for days. Super slick, and they synced immediately together. They paired [with my device]; it was so easy to deal with. And now, five months later, they [became] the standard.

Bias by AI

Hope: Bias by AI is not something new. AI acts on data; you've touched on this. If the data is input by people who are biased, or the data itself is biased, then there's a big chance the AI will act biased as well.

We covered this pretty well. I mean, this is kind of where we began. This is how it comes out with racial slurs. This is how it comes out with awful decisions. It's all rather the same thing.

Jabinh: As usual, it points to social change in my mind. So I think that we can't expect the technology to essentially solve political problems, despite what I was saying about optimized diplomacy.

Essentially we have to fix those problems ourselves, and whether they are problems at all depends on somebody's political views; they may or may not think these are an issue. To my mind, they're huge injustices. They are issues of major inequality. They need to be addressed. That isn't something there's a global consensus on; if there were, we would have a very different world. So I feel that's an area where we need to put a lot of energy.

Hope: Yeah, I agree completely. The same ethical questions above were also raised by the World Economic Forum in Forbes, and here's a quote from the European Parliamentary Research Service.

Just because the issues above persist doesn't mean the world now has stopped developing AI. In fact, companies are now using the ethical questions above to develop more advanced AI with tools to hinder further issues.

For example, the Future of Life Institute in the US is developing a protocol to ensure the development of AI is beneficial to humankind with a focus on safety and existential risk, autonomous weapons, arms races, human control of AI, and the potential dangers of advanced general/strong or super-intelligent AI. Many private companies are donors to this institute, including Tesla and Skype.

I feel a mixture of encouragement and fear from that. 

Jabinh: I think it depends on what they mean, and it's not the clearest statement to my mind. I'm still in favor of regulation, so I want there to be regulation saying that we want AI to be transparent, that we want safety to be an absolutely intrinsic thing in any kind of development. I'm not quite sure that's what that document is saying. 

Which goes back to the point you made earlier about the kind of press-play response that people often give to questions where people are concerned. So there's a question about safety, or this question about ethics, and there's a kind of standard “this is the answer.”

And it's not necessarily very well thought out, and it's as if they think it's a tick box.

Hope: They heard [what] their TA said in university and think that makes it an acceptable answer. And this is the lack of study of AI, basically; it's been used as a tool to get a grade. Therefore: follow these steps, get a grade, agree with the TA, get a grade, and then move on.

And then you hear people answer. When tech companies are in trouble, they give an ethical answer that sounds like that. And they're like, “yeah, I got it right,” when no, you didn't. And the fact that you think you did is the problem. You genuinely can't. It's quite worrying when you hear that.

You want to believe that people are right, but it’s too risky to believe that fully. I almost would prefer an acknowledgement that there's an element of it that’s going to go wrong.

Jabinh: It's honest. For example, for somebody to say they don’t know when they don’t know. It's valid in lots of situations. To me, this is an area where journalism plays such a vital role. Because you're in a position to ask the right questions, to ask difficult questions, to essentially help hold people accountable.

Hope: Absolutely, and I think one of the reasons why I founded my company was because I want those journalists to be able to do that as much as possible. You meet the publishing demands by engaging someone like us.

So we'll do all the stuff that takes up so much time so you can do the highest value stuff. So you're asking the right questions, you're reporting on the right things. And the same for your brand. But I think having the fail-safe in there and having the questioning in there is really the most important thing.

Would you like to give any closing thoughts?

Jabinh: I think the key things to me that are issues at the moment are data bias. Also the thing I was saying about regulation, that I think I would like to see a stronger global consensus on things like safety and transparency. 

I think there are far too many companies which are kind of saying, “well, we don't really understand fully how this works” or “we're not going to expose that because it's commercial.” And that is going to be increasingly an issue as we start to run into problems, which I think is inevitable because it's the nature of a new technology.