Episode 38: Values and Ethics in Design with Morten Rand-Hendriksen

Interview with Morten Rand-Hendriksen on the topic of Values and Ethics in Design and Tech


In this episode, we continue a critical conversation about the challenges we face together in computer science, ethics, and the unintended consequences of emerging technologies that lack defined purpose and regulation.

Morten Rand-Hendriksen has published countless valuable courses on LinkedIn Learning, reaching hundreds of thousands of viewers all over the world. He also contributes to the web community as a public speaker, author, educator, developer, and design philosopher.



Transcript

Cathi Thanks for joining us, Morten. We'd love to learn a little bit more about your team and your work and where you're at these days. What are you working on?

Morten Right now? Well, for the last year or so I've been working mainly on AI things, because AI came into the regular world in November 2022 with ChatGPT. And because I'm a technology educator, everyone automatically comes to me and says, hey, what is this? What does it mean? How can I use it? What is it for? All those questions. My job is to help people navigate those types of things. So that's what I and my direct colleagues have been working on for, yeah, more than a year now. It's a big disruption in the technology world, and there's an enormous amount of uncertainty around it because it's just poorly defined. It's a product without a purpose. AI as a whole is a product without a purpose. So a lot of people are trying to figure it out, and there's a lot of hype and a lot of myth making, and my job is to help people navigate all of that.

Cathi That's a lot. And not only are we going to look back someday on the things we probably wasted our time on in terms of AI, thinking, oh, we thought this was going to be such a thing and it wasn't, but I know it also ties very closely to tech ethics, and that's a big part of your work, along with the boundaries of ethical practices and guardrails and governance. I'm anxious to hear you share your insights around where you think AI goes too far, things you're worried about, and the things you think are really great solutions for AI, where it's ethically powerful and making a big difference as well.

Morten Yeah, there's a lot. It's this sort of nebulous issue squared, right? There are multiple layers of things that surround ethics and AI, because ethics and AI is a subsection of technology and design, and we already don't do technology and design ethics at all. It's not that we don't do it well, we don't do it. And then we have this subsection of just ethical AI, or ethics in AI, that adopts all of the issues of technology and design ethics plus a bunch of new things. So for me, it's been interesting to watch the evolution of this conversation over probably the past five to seven years, because some of us started talking about the need for ethics in design and technology a while ago, and that's not to say the conversation wasn't already happening. I mean, there's a famous paper from the mid-1970s by Mario Bunge, I don't remember the exact title, but it's something along the lines of "Towards a Technoethics," where he outlines how technologists are craftspeople in the same way that other people who build things are craftspeople, like engineers, and engineers are held to ethical standards but technologists are not, and that's a huge problem. And then he breaks down why it's a problem. And then nothing happened for the next 50 years, right? And now all the things he predicted, around big societal shifts happening because of irresponsible ethical practices around technology, are actually taking place, and have been taking place for a long time. And then we get AI into the mix. So if we try to go from the outside in, you can say that the overarching ethical responsibility we have as technologists and designers is that when you design things for other people, you are shaping their possibilities going into the future. You are creating or limiting their capabilities in their own lives, and you are creating a path for them to walk down into the future. And that path is decided by your visions of what the future should be like, and your values, and your own morals. And if you work for a company, they are shaped by the company's vision, morals, and values. And because you work in the technology industry, and with technology, you are adopting all the visions of the future and all the values and morals of whoever created the technology you're basing it on, right? So there's this prevalent notion within technology that technology is value neutral, which is absolute nonsense. It's the most explicit hand-washing I've ever seen, saying, oh, I just make technology, this is not a political act, I'm not doing anything. And it's like, no, it is very explicitly political, in that you are defining what the future is going to look like for other people and what capabilities they will have in those futures, what they can and cannot do. So over the past decade or so there's been a slowly growing awareness of the necessity of ethics in technology. A lot of that has come from the blowback over social media and surveillance capitalism, the fact that large social media companies like Facebook have been gathering a lot of information about us and then using that information almost in an adversarial way against us, to program our behavior towards the benefit of advertisers. And with this growing awareness of the problem comes a growing set of questions around why. Why did this happen?
Why is it that these big technology companies are doing things like, for example, running social experiments or cognitive experiments on users without any oversight? And the answer was: technologists are not beholden to any type of professional ethics. And when they do things that are harmful, they are not held to account for it, and the company is not held to account for it. The very worst thing that could happen is some sort of nominal fine, which does nothing. So there was this very strong pushback, often referred to as the techlash. And of course the technology companies, very rich, very powerful technology companies, poured a lot of money into pushing back against that, in a combination of ethics washing, which is saying that you're doing ethics when you're actually not, and putting a lot of money into removing regulation, or protecting themselves against regulation, and things like that. But things were happening: big companies were hiring ethical oversight teams and building internal ethics programs. And then this AI thing kind of exploded out of nowhere. And I say that carefully, because it didn't explode in the larger scheme of what artificial intelligence is, but it became something that was in the public eye in November 2022. Out of nothing it went from people not interacting with AI systems directly to everyone interacting with AI systems directly, overnight, with the release of ChatGPT. And in response to that, a lot of these big tech companies immediately realized that the ethics teams they had on staff had been working on AI ethics for a long time, and all of them were saying this is super, super problematic. So they fired them, because that's the easiest way of getting rid of the problem, right? You just remove the people who say bad things. And so a lot of these very loud voices that had been working on these problems for a long time, who happened to be mainly women, mainly people of color, mainly from more or less marginalized groups, were kicked out of these big companies. And they've been operating on the side, starting up new organizations and doing a lot of oversight work, and a lot of them have now ended up working with governments and multinational entities to try to do something about it. They're now working in opposition to the technology sector, saying, since you didn't listen to us and you didn't self-regulate, we're going to help the regulators regulate. But then there's another piece of this, which is the regulators themselves, the governments and institutions and the people who make laws and rules for us. They are heavily influenced by lobbyists and by people who have a vested interest, because those people can pour endless money into educating the policymakers, and the result is that a lot of policymakers have an understanding of technology that skews very heavily in favor of large technology companies, because they're the ones that spend the most money. So it's really complex. And this isn't even the actual ethical issues, I haven't even mentioned any of those yet. This is the environment in which we are talking about the ethics of technology. So the environment itself is fraught and very, very large and complex, and the socio-economic situation this stuff is put into is very complex. But then we get to the actual ethical bits. So I don't know, should I just keep going here?

Cathi Yeah, no, I think it's fascinating. Sometimes I am in situations in my role as a researcher where I'm advocating for certain ethical decision making at a C-level executive level, and the only hope I have is that the business need will become so huge financially that the shift will happen and an ethical decision will magically align with business decisions, and that's a terrible place to be operating from. So I fully understand the complexity of developing regulations, and of being heard and helping people make the right decisions for human beings. That being said, I will ask you this question and just keep going with it. We love your perspective, and especially when you tie it to design decision making and the responsibilities within that area, it's just so, so valuable. I wonder if big tech had universal regulations, whether that wouldn't just help, because right now it's such a cop-out all the time: well, we can't make a shift because the rest of the industry isn't. It gives people an out, whereas I think a lot of people actually do want to do the right thing, and if there was a regulation in place, everyone would be forced to be doing more of the right thing. Does that make sense?

Morten It does. And again, this problem is caused by inaction over decades. If you go look at comparable industries that have direct impacts on people's lives, be that medicine or psychology or finance or law or engineering, what you'll discover is that all of these industries at some point realized they were having a significant direct impact on people's lives, and they not only self-regulated but created a set of rules to follow and then handed those rules over to government entities and said, please enforce these on our behalf. Which is why, if you're an engineer and you build a bridge incorrectly and the bridge falls down, you the engineer may end up in jail, and not only may you end up in jail, but if anyone knew about it and didn't raise a flag, they could end up in jail too, and the company could end up folding or being banned from doing future work. I have a friend who's a structural engineer, and she told me that part of her professional ethical responsibility is that if she is aware of someone, either within her company or outside of it, who is doing something wrong and she doesn't report it, she is held as accountable for the problem as the person who did it. That means not only would she lose her license and probably face strict penalties, but her company would be penalized to such an extent that it would be hard for them to continue working, and the mark that's put on your record carries with you, so you couldn't just say, oh well, I screwed up at this company, I'm going to switch over to another company and everything is fine. And that applies across the board. If your dentist does something wrong, and it's wrong enough that other dentists deem it irresponsible, that dentist is not going to practice again, right? Same with psychologists, same with doctors, same with lawyers, same with accountants. All of these industries that impact people's lives have realized it and acted on it. Tech and design stand out as the outlier here because, and this is just me hypothesizing, tech and design feel far enough removed from the user and the consequences that it seems okay to say we're not responsible. In both tech and design you're in this situation where you think, I'm just drawing graphics, or I'm just writing code, I don't see how that has a direct impact on people's lives. And if you try to bring up this conversation with people, you always hear things like, a calculator can't harm anyone, or, you know, the font I choose has no impact on anyone, which is a very silly argument to make. Because if that were true, you wouldn't do any of this, right? If it were true that writing code has no impact on people's lives, we wouldn't be writing code, because it would be pointless. The reason why we write code is because it does things to people, it gives people capabilities, right? The reason why we design things is because it does things for people: we are conveying information through design, we are conveying interactions through design. So the argument that it's just code, or it's just pixels in an app, may appear sound if you remove your thinking from the conversation. But the second you start thinking about what the craft actually is, to design something or to build something or to write code, you realize it's very explicitly trying to do something to other people. Right. So that's one end of the conversation.
The other end of the conversation is that there is this prevailing thought within technology that if I don't do it, someone else will. And in some cases that's true, right? I'm watching a documentary series on the nuclear bomb right now, and you can very much see that idea. The reason why the Americans built the nuclear bomb was because they thought the Germans were building one, right? And the reason why they deployed the nuclear bomb was because they thought, if we do it first, no one else will, because they will be afraid of retaliation. And then you get the Cold War, right, where both the Western powers and Russia are building up arsenals to say, well, if you do it, then this is what happens. It's this very reactive way of thinking. And that has come into our technology space in a really real sense. You can see companies say, we have to do this even if it's problematic, because if we don't, we lose competitive advantage. Because it's all about money, it's all about getting funding for doing things, and it's all about unbridled capitalism in the most extreme sense, people chasing money regardless of the consequences. And then in addition to that, you also have this science-for-the-sake-of-science component, which is that a lot of people will pursue things just because they're interesting. They're chasing technical sweetness: is there a way of doing this? Wouldn't it be really cool if we could do that? And then there is no thinking about what it actually means to do it; it's more like, can I get a computer to do these things? So these three different levers are all pulling in the same direction, which is basically saying any type of regulation, or any type of self-regulation, could potentially stand in the way of technological progress and economic opportunity and science. All three of those things could be blocked if we try to do any type of regulation, which is why anytime someone says, hey, we should regulate, someone will always come back and say, but if we do, we'll lose to someone else. And for AI, the argument is always, if we regulate, we will be beaten by China, which is a bizarre argument because China actually does a lot of regulation internally. This is a complex issue, but China does quite heavy regulation of AI systems that are released internally, while not regulating the same way for AI systems that are released to the rest of the world, because they have a firewall and they can control what happens inside and outside those walls. But that is very much the argument being made, that if we don't do it, someone else will. And my pure ethics response to that is, but we don't accept that for anything else, right? We regulate harmful drugs, we regulate nuclear technology, we regulate human cloning. We regulate lots of things internationally. We have international laws against biological weapons, and when countries are seen to make biological weapons, there are significant, enforced consequences immediately. We can do regulation. What is required is politicians being willing to do it, and doing it in a way that is informed by experts who actually know what they're talking about, and it requires the industry acknowledging its impact on the world.
And, like I said earlier, the big issue here is that the industry is very, very reluctant to acknowledge its responsibilities. It's very eager to show how it has positive impacts on people's lives, but if you bring up negative impacts, there's a very aggressive pushback, a real, in some cases vicious, pushback, because you dare speak up and say, hey, you know, these things actually impact people's lives.

Cathi So it's bad actors. The negative impact is part of everything in tech, and to just look at, you know, the glass half full is to miss the opportunity to solve even bigger problems. Yeah, it's discouraging.

Morten So you used the term bad actors, and I find this is where it gets extra tricky, right? Welcome to Morten's neurodivergent brain, where everything is more complicated than it seems. Take the notion that the people who build technology that is demonstrably harmful, let's say the team that made an AI gaydar that would look at a picture and say whether or not you're gay based on the picture, are bad people, or bad actors, or have bad intentions. In a lot of these cases, if you go look at what they're doing, you realize that not only can you not necessarily make the inference that they are bad people, but more importantly, they seem entirely unaware of the consequences of their actions, because they are doing work that is more science based than it is reality based. So I went and read the paper about this facial recognition system that was supposed to be able to determine your sexual preferences, and it was a standard artificial intelligence paper around facial recognition and emotion detection. It was very clear from that paper that they had just gone out and said, what could we possibly get a computer to detect that isn't immediately obvious to people? And then they just picked sexual orientation, because that's a challenging problem. It's not because they wanted to make a gaydar, it's because that's a challenging technical problem, and they thought, if we can make the computer do this, because these AI systems are able to surface information that we can't see, that would be really interesting from a science perspective. And then the problem is, once you make that technology, whether or not it works is irrelevant, because there are people with political motivations in the world who will see it and say, number one, if someone was able to do it, we can do it too, and number two, this suits my political agenda, so therefore I'm going to use it. Because there are countries in the world, like Russia, where being non-heterosexual is borderline illegal or actively illegal, where you can be thrown in jail or worse just for having a sexual preference outside the heteronormative. Right.

Cathi Yeah. And the consequence is that you've empowered these bad actors to harm people. It's brutal.

Morten And so even though the intent may have been purely science, can we get a computer to do this novel thing, the consequence of that intent may be extremely serious for people you don't even know about. And that's why in most science you have very strict limits on what you can do, and a lot of ethical oversight. That kind of breaks down at the computer science level, because computer science is about the computers, not about the people. When you do a research project like something around facial recognition, you're not actively engaging with people, you're engaging with random image sets gathered from the internet. So there's a distance from the research to the person who is, or would be, impacted, a distance that wouldn't exist in any other type of research project, because there the person would physically come in and do something. And that's enough of a distance that you can skirt a lot of these ethical oversight requirements on the science side, which means we need to change the guidelines around this type of research. But it's complicated because of how it evolved, right? Artificial intelligence didn't arise because someone said, we're going to make sentient computers and let them take over. It arose from computer scientists asking, can we get computers to perform actions in such a way that a human would be unable to tell whether it was a computer doing it or another human? That was back in the sixties, and then you just keep piling on from there. Can you get a computer to recognize a face in a picture? Can you get a computer to then see if the same face appears in multiple different pictures? Can you get a computer to replicate movement in people? Can you get it to write text that is indistinguishable from human text? Can you get it to translate text from one language to another while carrying the meaning over? These are all computer science problems.

Cathi They're all just "is it possible to do this" questions. But this is also just tactical research being done, not strategic research. One of the areas I work in is strategic UX research: uncovering what all the possible unintended consequences of our work could be, not just can we do it, or how can we do it faster, or how can we do tactical, solution-based research, but really defining the outcomes before we invest all this money and energy into something that has unintended outcomes we don't want.

Morten Yeah, but you're not working with researchers there. You're working with product people, right?

Cathi Right.

Morten The difference here is that a lot of technology comes out of research, and the research doesn't care about the consequences in the same way a product company or a product project would, because the research is about the technology: can you get the technology to do something, for the sake of seeing if we can do it? There's less thinking around what it means if we do it. And again, I go all the way back to the nuclear weapon research. Before the nuclear weapon was developed, when you look at the scientists who were coming up with the math for splitting the atom, none of them were thinking, this is a bomb. It wasn't until the research was done and they were able to do it that they looked at the results and went, oh, wait a second, this is an endlessly progressing expansion. So wouldn't that mean that if we put enough density into the system, we'd get a chain reaction and this enormous explosion out of it? Oh, that's a bomb. And then World War Two happens, and they go, wait, can we actually do that as a thing? And the scientists are like, yeah, that's an interesting science experiment, let's see if we can. And only when the bomb actually explodes do they go, right, that might not have been the best idea. But at that point the cat is firmly out of the bag, and it's actually 52 cats, and they're running everywhere and doing all sorts of random things. So then the step from a successful test of a nuclear weapon to the deployment of a nuclear weapon is months, right? Because once the technology exists, you can get rid of the scientists and all the people having second thoughts and just say, well, we know how to build it, we're going to build it and use it. And then deployment happens, and after deployment the people who did it go, this was not great. Eisenhower actually goes and says, we should not do this, we should just not use these weapons anymore. But by that time other countries have seen it's possible, and because the science exists, it is now inevitable that other countries will do it, at which point you get a Cold War situation, right? And the only way to stop that is to regulate the science itself, which is super, super difficult. So then you have to regulate the materials, and say that if you accumulate enough of the materials, you will be regulated in some way. Go look at all the regulation around nuclear technology and you see how this rolls out. So it's very much that type of problem that we're facing, but with technology. And the difference between nuclear technology and computer technology is that nuclear technology is technically possible for anyone, but it requires fissile material, right?

Cathi Right, and my grandma can't get her hands on that, right?

Morten She cannot obtain any fissile material, just none, zero, absolutely nothing. You could go dig in the ground and find something that is moderately radioactive, but that's as far as you'll get. For computer science, you can go buy the cheapest possible computer, you can buy a Raspberry Pi for 20 bucks, and then you can download some software off the internet, put it together, and make something extremely harmful. And it'll take you two hours, regardless of AI. That is the accessibility of technology, which means the onus of responsibility for the developers of technologies is infinitely higher, because the consequences of doing something bad are so much worse. A real-world, non-AI example of this is cryptocurrencies and their relation to ransomware. Ransomware is now one of the biggest threats to public infrastructure and to the stability of the world political system, because ransomware is ubiquitous and very easy to build. You can download ransomware from the dark web; it's not particularly hard to put together. And because we have cryptocurrencies that allow transactions that are untraceable, there is a monetary incentive to do it. And that's how you end up with 12-year-olds mounting ransomware attacks on hospitals, and state actors mounting ransomware attacks on public infrastructure like electricity supplies or water supplies. It's predicated on cryptocurrency. If there were no cryptocurrency, ransomware wouldn't have been developed, because there would be no money in it, because it wouldn't be possible to get the ransom paid without being traced. The reason why we have a lot of financial regulation is to prevent this type of activity. And the cryptocurrency crowd, not fully understanding the reason behind financial regulation, just saw regulation as a harm, as a blocker to progress, and said, can we make a system that shirks regulation and makes it impossible to regulate? So they built the system, shipped it into the world, and then you get a new type of crime that is almost impossible to stop. And the only way to stop it is for everyone to stop using the cryptocurrencies, because if enough people stopped using them, the systems wouldn't work and it would fall apart. But there's a financial incentive, and there are true believers within the systems, therefore it perpetuates. You can take that type of thinking and apply it to a lot of things, and you'll see there's a very strong pattern here: a lack of understanding of how the world works, plus some utopian dreams about how the world could be that are fairly limited in their scope, and then massive consequences on the other end of that.

Cathi And so we're sort of at the precipice, is that the word, for AI and how that will play out?

Morten Yeah, and I want to be very clear here that there are parts of AI that are problematic, and there are also parts of AI that are extremely promising. I think the conversation around AI has been severely polluted by generative AI specifically, and by the way generative AI was rolled out to the public. When we look back on this 50 years from now, I think a lot of people will say ChatGPT was unfortunate, because of the way it was rolled out and the way it got picked up. It's funny, if you go look at the original post for the release of ChatGPT, it's like, yes, we're releasing this little research project to see how people will interact with a large language model if they can just talk to it. And then everyone is like, a machine I can talk to? What? And then they start talking to it and they go, wait a second, I can do all this crazy stuff with it, and then the media gets hold of it and it just explodes. And you could tell it was happening. You kept seeing ChatGPT go down constantly because it was being overused, and then they'd throw ten times more power behind it and it would immediately crash again, and then ten times more, and it would still crash, because the scale was so enormous, right? This was unanticipated, and I remember some technology leader at some point said this technology was released five years too early. It was supposed to be released five years from now, right? It was supposed to have been tested and productized before it got released; instead it got accidentally dumped into the world before the plans were there, before any of the safeguards were there. This wasn't supposed to happen. It was almost an accident, right? Because DALL-E was already there and had gotten moderate interest, and all these other things had gotten moderate interest. But looking back on it, of course it was going to take off, because it was a talking computer. Talking non-humans are a thing we are wired to respond to. If you have a parrot and the parrot can make human-like sounds, you can't help but think it's talking to you, because you're wired to think it's talking to you, even if you're the person who trained it and you know for a fact that it does not speak and that what it's saying has no meaning. You can't help but give Polly a cracker, because Polly wants a cracker. Language is our API, our programming interface, and when you give that to a computer, of course people are going to talk to it. In hindsight this was not a shocking revelation, because we've tried this before: we've put bots into the world, and people do talk to them. And then they immediately try to get them to do something problematic, right?

Cathi That's true. Being dyslexic, I have mild dyslexia, the language support I get out of it is transformative for me. It's amazing.

Morten There is enormous potential in this technology. The problems lie, number one, in the fact that we're focusing on the wrong things. The way people currently understand generative AI is as a replacement for human creative work: you use it to do creative work for you, to write an article or generate an image or something like that. That is, in my opinion, a really bad path, a walk off into the desert, because that's not where the value lies. But it is how we have monetized the internet, and because we monetize the internet through content creation, of course people are going to use machines to do the content creation for them, because they can make more money that way. The consequence of that is flooding the internet with garbage, essentially poisoning the internet, which is what's happening right now. Hopefully we'll get to a point where people just get off this bandwagon, and you can start to see it: people are starting to realize that all this AI-generated garbage is garbage. Once we get past that, you realize that what the AI systems, especially the language-based systems, are good at is transforming text. And that can be anything from taking a big piece of text and reducing it down to a shorter piece that has the same content, to translating it into a different language and carrying the meaning over, to expanding text, or changing the reading level of a text, all these types of things. And they're very good at translating our human requests into something programmatic. So you can input a request as a sentence, or talk to it, and it can pick out the necessary pieces of your request and send those pieces over to a computer system to perform some action. And when the data comes back, it can parse that data into something human understandable. So it's very much an interface. And once we use AI systems as an interface, rather than as a generative system, you can all of a sudden add new powers to existing systems. You can make existing systems far more accessible, because you can talk to them in natural language and get a natural language response, which can revolutionize a lot of things we're doing in our lives. And like you said, the accessibility component is enormous. The fact that I can now take an academic textbook and transform it into an audiobook without having to pay thousands of dollars for someone to sit and read it. The fact that, if I were a student, I could take my textbook and say, quiz me on chapters two and five, and have an AI help do that for me. I just finished a data science class at university, and I did all the classes. I also built my own AI agent to help me study, and the agent is part of what helped me ace the class. Not because I was using the agent to do any of the exams or the tests or the quizzes, I didn't, but because it augmented my ability to learn. It was like, I'm reading the information I'm getting from the teacher, and I can say, give me more information here, can you expand on this? And it can expand on it as much as I want. Or I say, I don't understand this example, can you explain it to me, or can you give it to me in a different context, to help me understand. So there is huge value in it.
But you'll notice that none of the value I'm talking about is content generation. And a lot of the concerns we have around AI, if you go all the way down to you and me as practitioners interacting with AI and the ethical concerns we should have around it, there are all the issues around bias in AI, and all the issues around content: where the content came from, whether it's duplicating content, all of those pieces. If you use these as content-generating systems, everything under those categories surfaces. If you use one as a translator or a condenser, a lot of those issues go away, because you're not asking it to come up with a new idea or make a new piece of art. You're asking it to take an existing piece of work you already have and do something with it, make it shorter or longer or broader or whatever it is. It's a very different purpose. And it also means that if that's what we're going to use AI for, then we can retrain the models without all that copyrighted material, because it's not necessary; we're not using the models to do all these generative things, we're using them as an interface. And then on the side of that is all the other AI, like all the medical AI work where you're analyzing X-rays or looking at blood panels and those types of things. Once we get to a point where AI systems work better than humans, and you have human oversight in the process just to make sure they're not going off and doing weird things, there's again huge potential. There is a future in which we have self-driving cars that actually work. There is a future in which we have self-driving planes and trains that work and are far safer than humans. But that requires the work being done with human protection in mind, rather than just the technology component. Right now self-driving cars are a feature that's being used to sell more cars. It needs to be a feature that's built to save more lives. That shift in focus is enormous.
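A minimal sketch of the "AI as an interface" pattern Morten describes above, where a natural-language request is reduced to a structured call against an existing system and the structured result is turned back into plain language. Every name here (call_llm, lookup_order, the order data) is a hypothetical placeholder, and the model call is stubbed out so the sketch runs on its own; it is not code from the episode or from any particular product.

```python
import json
from dataclasses import dataclass


@dataclass
class StructuredRequest:
    action: str        # which existing-system capability to invoke
    arguments: dict    # parameters picked out of the user's sentence


def call_llm(prompt: str) -> str:
    """Placeholder for a real language-model call.

    In practice this would go to whatever model or provider you use; here it
    returns a canned JSON extraction so the sketch is runnable offline."""
    return json.dumps({"action": "lookup_order", "arguments": {"order_id": "A-1042"}})


def parse_request(user_text: str) -> StructuredRequest:
    """Ask the model to pick the necessary pieces out of a free-form request."""
    prompt = "Extract the action and arguments from this request as JSON:\n" + user_text
    data = json.loads(call_llm(prompt))
    return StructuredRequest(action=data["action"], arguments=data["arguments"])


def run_existing_system(request: StructuredRequest) -> dict:
    """Stand-in for the existing, non-AI system that actually does the work."""
    if request.action == "lookup_order":
        return {"order_id": request.arguments["order_id"], "status": "shipped", "eta_days": 2}
    raise ValueError(f"Unknown action: {request.action}")


def explain_result(result: dict) -> str:
    """Turn the structured result back into something human readable.

    A real implementation might hand this step to the model too; a template
    keeps the sketch deterministic."""
    return (
        f"Order {result['order_id']} has {result['status']} and should arrive "
        f"in about {result['eta_days']} days."
    )


if __name__ == "__main__":
    question = "Where is my order A-1042?"
    structured = parse_request(question)          # natural language to structured call
    raw_result = run_existing_system(structured)  # existing system does the real work
    print(explain_result(raw_result))             # structured data back to natural language
```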

Cathi Beautiful, yes. Wow. And that would be an intentional consequence of the design, as opposed to an unintended consequence. I don't know if anyone has written a book called Unintended Consequences, but it feels like today we're ripe for that important book to be written.

Morten Yeah. To me it comes down to this: there's a branch of economic theory that also overlaps with philosophy called the capability approach, developed by Amartya Sen, an Indian economist. He won a Nobel Prize for this a long time ago, and he developed it to help the World Food Programme or the World Bank, one of those, figure out how to give people help that they actually need. Because a lot of aid for developing nations and nations in crisis is very much rich-nation thinking: if I was in trouble, what would I need, and then we give that same thing to the people who are in trouble. And then it turns out there's a very fundamental mismatch of needs, right? So he started looking at what actually works. What works is giving people the capabilities that help them, in their context, to do the things that are valuable and meaningful to them, and then giving them the agency and autonomy to choose how to use those capabilities, or whether to use them at all. The classic example is, instead of giving everyone a bicycle, you give everyone the type of transportation they need, because not everyone needs a bicycle, not everyone can use a bicycle, and not everyone has the capability of fitting a bicycle into their lives in a meaningful way. And there's a sub-genre of that, which is that if you give people a bicycle and you say you must use it as a bicycle, you're severely limiting their capabilities. Because it turns out that back in the 80s a couple of Scandinavian countries gave a lot of bicycles to countries in Africa. I don't remember the reason why, but there was a big push to donate your old bike to Africa; I remember when I went to school it was specifically Mauritania. I'm not sure what caused it. But the side effect was that a bunch of people realized they could dismantle the bicycles and use them to build very efficient water pumps. For that to happen, you not only need to give someone a bicycle, you have to give them the autonomy to say, I don't need a bike, I'm going to take it apart and build something else instead. And a lot of emergency aid programs are designed in such a way that you're only allowed to use the aid in the intended way, and if you don't, you lose the funding. It's not just emergency aid; a lot of things in our world are very much like that: we give you something, and if you use it the wrong way, you lose it. And what Sen was saying is, no, don't do that. Allow people to decide how they want to use their capabilities, because they will make better choices about how they want to live their lives. And he makes this distinction between the things we value and the things we have reason to value, where the things we value are things like good food and, you know, nice company, comfort, watching TV shows, those kinds of things. The things we have reason to value are a roof over your head while you're sleeping, food that is not going to kill you, access to health care and education, things like that.
And our efforts, when we build things and design things for other people, should be focused on giving people the capabilities and functionings to do and be what they have reason to value, over everything else. And if we apply that way of thinking to our design practice, it changes our design practice from one focused on return on investment, or business goals, or whatever, to how do we reach our main goal in our interaction with the customer. Because if you go and look at any design project, it doesn't matter what it is, you will notice that at the very core of it is a designer or a product creator or an inventor saying, I'm seeing the world, I see how I can make it better, and I'm going to do it through this particular designed interaction. And then at some point they need funding, and the funding becomes an economic thing. But the core idea is always a vision of what the future could be for people, if only they had this capability. So if we then say that should remain the focus of design, that should be how we measure the success of design: not how much money we earn, but whether or not we're changing people's lives in the way we intended, and whether those changes are given to people in the form of capabilities that they can choose to use in the way that makes sense to them. Because if you're successful in that endeavor, the money will come. You're not doing it for the money; the money becomes the consequence of the success of your capability change.

Cathi Right.

Morten And that's how we tie this all the way back to the beginning, where I said one of the big challenges with AI is that AI is a product without a purpose. Most of AI, especially the AI we interact with day to day, all this generative AI stuff, are research projects to see if we could get computers to talk like us, make images like us, make videos like us, translate audio, things like that. These are tools without a purpose, and we have to figure out what those purposes are. And we are in a unique position now, because these technologies were dropped on us five years before they were ready, meaning we get to choose how we want to use them and what type of future we build with them, because the companies that build them haven't had time to figure out the economic model yet. And if we do that in a responsible way, we can actually get somewhere positive, where we can build livable futures for ourselves and the people around us.

Cathi Thank you, Morten. It's so important to focus on the gains and value we can create for people, beyond just pumping out abstract solutions because we can. I thank you so much for your time. I really encourage everyone to take your courses, and I'll put links in the show notes to your recent talks. Are you going to be giving any more talks in the near future that we should know about?

Morten I don't have any current plans. I'm working on trying to start up work again on my long-suffering book project around this topic, not the big topic, but around this. I've been trying to write a book about it for a very long time, but I have a wife and a son, a house that needs maintenance, a job, and other things happening in the world. So it's challenging.

Cathi Keep at it incrementally as you can, and don't be so hard on yourself. Keep going to crossing guard duty with the kids every week. Life is short, enjoy it. And thank you so much for your time, Morten. This has been a fantastic podcast episode. Thank you so much.
