Digital Empathy: Is Artificial Intelligence Revolutionizing Mental Health?
June 11, 2024
00:58:53 · 49 MB


The use of AI for mental support is rapidly increasing. Research shows that AI designed to understand and respond to human emotions can help people combat depression, avoid suicide, and improve their human relationships. We examine both the potential and the ethical questions arising from using AI for mental support. We also discuss how such technologies can create new opportunities and challenges for organizations.

Sources Discussed:

Ayers, J. W., et al. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine, 183(6), 589-596.

Li, J. Z., Herderich, A., & Goldenberg, A. (2024). Skill but not effort drive GPT overperformance over humans in cognitive reframing of negative scenarios. PsyArXiv Preprints. https://doi.org/10.31234/osf.io/fzvd8

Maples, B., Cerit, M., Vishwanath, A., & Pea, R. (2024). Loneliness and suicide mitigation for students using GPT3-enabled chatbots. npj Mental Health Research, 3(1), 4.

Sharma, A., Lin, I. W., Miner, A. S., Atkins, D. C., & Althoff, T. (2023). Human-AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support. Nature Machine Intelligence, 5(1), 46-57.

[00:00:00] Welcome back to The Management Lab. I'm Sean Hansen from Saunders College of Business at Rochester Institute of Technology. And I'm Uri Gal from the University of Sydney Business School in Australia. Sean, good to see you. Uri, it's good to see you too. I meant

[00:00:35] to try to add some energy to my intro this week, and I didn't do it, because I listened to the last episode and, oh my god, it was like I was on Valium. Yeah, I'm thinking, should

[00:00:45] I be taking this personally? Is this my effect on your levels of energy? Do I just cause you to be deflated? I'm gonna say yes. I'm gonna go with yes. I do have

[00:00:56] that effect on some people, granted. Yeah, right, if it were just us talking in private I would have inserted a joke there for sure. So this week we're doing a little bit of a return. It's not

[00:01:09] quite a return, but it relates to some of the topics that we've covered before. So one of the things we talked about earlier was emotional AI within organizations, and we also talked a little bit about algorithmic aversion. And I think this topic relates to both of those,

[00:01:25] and it's pretty interesting. I gotta say, the stuff we read kind of surprised me. So the topic is the use of AI in mental health counseling. Is that a fair label for it?

[00:01:39] Roughly speaking, yeah. I would say it's the use of AI in such a way that the AI technology is expected to produce empathetic responses, or engage in empathetic interactions with people. Okay, that's good. Yeah, the ability of these tools to show empathy in a way that

[00:02:01] would be interpreted as empathy by actual human beings. Yeah, whether they really "show" empathy is an open question. You said you were surprised. Why were you surprised? Oh, the evidence seems very clear to me,

[00:02:16] based on the research. So coming in, I'll just give you my priors. Coming into this, I would have thought that this is one of those areas where we would say, traditionally, humans have an advantage

[00:02:29] over AI, right? Like, we are better at certain things, and one of them is being human, right? And expressing humanity to others. Although, looking at some people around us, you know,

[00:02:40] I have to question the premise. Yeah, yeah, it might have been a flawed assumption, and the evidence really strongly suggests otherwise. So we looked at a number of different studies,

[00:02:52] and they varied slightly. Some were responses on online fora like Reddit, or, what's the other one that gets referenced here? Talk-something... It was an online support platform for people who are psychologically distressed. Yes, mentally distressed. But they used Reddit

[00:03:17] as another example. People basically go to these online platforms to get advice from others, right? Advice about challenging experiences in their lives, or any of a number of things. And as we go, I'm sure I'll come up with the name of the platform.

[00:03:33] TalkLife. It's called TalkLife, which I was not familiar with before. And the data seems to show very clearly that the AI is better at, again, I have trouble with words like showing or expressing empathy. The AI generates responses that are judged more

[00:03:56] empathetic than humans', across the board, across the studies we looked at. I'm sure we could find some that challenge that, but this is a whole sample of studies, and all of them seem to suggest,

[00:04:11] in these different domains, that the AI gives better answers, you know, or at least answers that are deemed more constructive and more empathetic. By actual human beings. By human beings. So in almost all, well, in one case they used two kinds of judges, but in

[00:04:28] almost every case they used human beings to judge it. In some cases it was a panel of experts, mental health professionals; in some of the studies it was humans that had been trained

[00:04:40] on cognitive reframing, which I think comes from cognitive behavioral therapy. But basically, humans who had been trained were then given examples of responses without being told whether each was generated by humans or by AI. In most of these cases these are just large language

[00:05:01] models, ChatGPT being one that popped up repeatedly, which is, you know, built on top of GPT-4. In one study, they actually built a separate application to suggest advice for peer supporters.

[00:05:21] But none of the human judges are told what the origin is, and they're asked to rate the constructiveness, or the empathy, of the responses. Yeah. And they judge the AI results better, right? Consistently.
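
As an editorial aside, here is a minimal sketch of how that kind of blinded comparison might be organized. The data structures and function names are hypothetical, for illustration only; the studies' actual pipelines differ.

```python
import random

# Sketch of a blinded-rating protocol: pool human and AI responses,
# hide their origin from judges, and unblind only at analysis time.

def blind_pool(human_responses, ai_responses, seed=42):
    """Shuffle responses together; judges see text only, origin is kept aside."""
    pool = [(text, "human") for text in human_responses]
    pool += [(text, "ai") for text in ai_responses]
    random.Random(seed).shuffle(pool)
    blinded = [text for text, _ in pool]   # shown to judges
    key = [origin for _, origin in pool]   # revealed only after rating
    return blinded, key

def compare(ratings, key):
    """Mean judge rating per origin, computed after unblinding."""
    scores = {"human": [], "ai": []}
    for rating, origin in zip(ratings, key):
        scores[origin].append(rating)
    return {o: sum(r) / len(r) for o, r in scores.items() if r}
```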

[00:05:46] There was this one study where they used Replika, which is driven by ChatGPT as well, I believe, and it's marketed as the friend who doesn't judge you. I'm paraphrasing, of course, but that's roughly what the tool is meant to be. It's meant to be

[00:05:58] a companion, or a friend, or a counselor if people need support, and they use this AI tool to get that help from an AI entity. It's like an avatar, and you can personalize it and

[00:06:14] customize it. You can give it a name, a gender, you can decide how it dresses. Yeah. And people have real, meaningful, rich, long-lasting relationships with their AI avatars. And what did that study find? So before I read that particular study,

[00:06:34] I had seen previous research done on Replika that was quite critical, because it was highlighting various ethical conundrums, which I think is really natural, right, conundrums that emerge when you have people interact with an AI in such a capacity, one that has to do

[00:06:51] with people's innermost feelings and concerns and difficulties. It's highly personal, right? It's not like a utilitarian interaction where, I don't know, you use AI to decide what movie to watch, or where you want to invest your money, or where you want to travel, stuff

[00:07:09] like that, where we've seen AI used many times before. This is a different domain, one that cuts much closer to the core of what it means to be a human being, and all our existential concerns,

[00:07:20] right? And so previous research was quite skeptical and critical of using an AI in such a capacity. It was highlighting things like, you know, what if people get dependent on their AI and then suddenly it goes through an update and it changes character? So you've basically lost

[00:07:41] your friend. What happens then, right? Yeah. Or, I think there were early examples where the Replika avatars actually encouraged people to commit suicide, or something to that effect. Um, by the way, Replika markets itself as the AI companion that cares, right? So they don't

[00:08:06] hide it, right? Like, it's very explicit that that system is built on artificial intelligence. And yet... So that particular study was Maples et al., and the results in that study were quite astounding, right? So they looked at a

[00:08:22] bunch of students who had used Replika, and modeled sort of the types of interactions and outcomes that they have with Replika. And one of the most significant outcomes is that a certain subset, and it was a fairly small subset of the total

[00:08:38] population, but still, explicitly said that Replika had contributed to them not committing suicide. Yeah, it's pretty hard to argue that that's not a positive outcome from the platform, if self-responses are taken at face value. That stood out to me

[00:08:59] as well. I mean, that's a very positive outcome, and given all the ethical questions that we may have about tools like this, the positive consequences, at least for some people, are undeniable. Another thing that struck me from that study, which

[00:09:17] I thought was quite telling, I forget what the exact number is, but a vast majority of people who use Replika score very high on loneliness scales, right? So they measured people for loneliness. And there was another thing, was it some sort of proxy for psychological

[00:09:36] distress, or maybe even bordering on depression? Yeah, I think so. There was a loneliness scale, and there was a separate one, the Interpersonal Support Evaluation List, so I think that's the sense of personal support. Now, of course, that particular finding is not

[00:09:56] super surprising, right? But the point I was going to make was that a vast majority of the users are lonely people. Yeah, but that's selection bias, right? Because they're only looking at people

[00:10:07] using the platform. So all that tells me is that the people who are inclined to use the platform are people who are lonely. Yeah, so right, I guess it's not surprising, but it's educational to know that this is not a representative cut of the population at large,

[00:10:25] right? So we're talking about a very specific subpopulation that has very unique... well, they have unique characteristics. Yeah, but not terribly unique, though. I mean, one of the stats that came out of that study is something like 53 percent of college students in the U.S. report loneliness.

[00:10:46] So more than half of all college students in the United States, and this is a reference to a separate study, but more than half of all college students in the United States indicate that they're lonely. So while, yes, it is a self-selected population, it's one that

[00:11:02] represents, I think, maybe a much more substantive percentage of the population than we might be giving it credit for. Okay, fine, so they're not that unique. I don't know if it's the same in other countries, but that number in itself is quite remarkable. More

[00:11:16] than half of college kids are lonely. Yeah, and that is not a good stat. No, no, I think there's lots of evidence, and this might be a conversation for another day, though,

[00:11:28] that we have sort of a mental health crisis in young adult populations in the U.S. Yeah. So across the board we see this evidence that these tools seem to have very positive

[00:11:43] outcomes. And I will say, reading all these had me thinking in multiple different directions, because I see so many parallels to other things, pre-digital,

[00:11:55] pre-AI. But can I make a quick point that I think is important to clarify? What these studies find is not that AI is actually empathetic in the same way that humans

[00:12:12] are, or are capable of being, right? None of these studies indicate, or claim to indicate, I don't think, that AI possesses the ability to show real, human-like empathy. It's just that

[00:12:26] the responses, or the output, that it produces are construed or interpreted by people as empathetic, right? So I just want to make clear that this is the point that we're making here. We're not

[00:12:37] claiming that AI is conscious, or is capable of showing real, genuine, meaningful empathy. Do you disagree with that? Uh, no, but I would play devil's advocate, yeah. Which is, if I were to ask any of my colleagues

[00:12:55] who are involved in AI tools, or maybe have a slightly different worldview than myself, they would say, what's the difference? How do you know, right? If it seems empathetic, what's the difference between that and actually being empathetic? So it seems like there's

[00:13:13] no difference from the angle of the recipient of the output, because people respond well to these. Indeed, there's a huge difference from the angle of the recipient of the outcome, and it favors the AI. Yeah, that's true. Um, what I meant is that they

[00:13:30] don't distinguish between AI-generated and human-generated inputs. They just think that the AI-generated output is more empathetic, but they don't necessarily know that it's been generated by an AI tool. Well, yeah, right. So they explicitly don't,

[00:13:48] right? I think that would be problematic as far as a treatment goes. The judges... there was one of the papers that did an algorithmic assessment too, right, where they used an algorithm to assess

[00:14:01] the empathy and accuracy and things like that of the output. But most of these were human judges looking at it, and from the perspective of the human judges, the responses generated by the

[00:14:14] AI tools seemed more empathetic. They read as more empathetic. So, okay, from this perspective, if there is any difference, it's actually in favor of the AI

[00:14:28] versus the human. But that's a very utilitarian view of things, right? Because we only care about the outcomes. And if we only care about the outcomes, then it seems like AI is just

[00:14:40] as good as, if not better than, humans at showing empathy, at least in the domains where these studies were conducted. But if we look at it from more of a deontological perspective,

[00:14:55] where the outcome is not the most important thing, and in fact not the only criterion that we should examine to assess the morality of any action, or technology usage in this case, but

[00:15:08] rather whether it complies with some universal moral truth that we want to believe in, or a duty, then the questions that we need to ask are quite different. And that

[00:15:22] goes back to all the ethical conundrums that we mentioned before, in relation to previous studies about these uses of AI in these cases. But I think there's also a difference when you talk about the pragmatics and the practicalities of the AI industry. Because if somebody

[00:15:40] makes the claim that, you know... and I don't know that anybody did in the case of Replika, for instance. I don't think anyone serious is saying that Replika is conscious, or

[00:15:50] that it has real human capacity for empathy. But if people believe that, then the whole AI market is gonna... you know, it has real repercussions for how much money is being

[00:16:05] invested in these tools, how we treat them, where we put them, how we use them. So I think it makes a difference, a huge difference, from this perspective as well. Yeah, but what about the case

[00:16:17] of Replika, right? Like, the users of Replika ostensibly know going in that it's AI. I mean, again, Replika doesn't hide that fact. So they know going in that they're getting algorithmically generated interactions, and yet they like it, and it helps them.

[00:16:41] I mean, one of the things that came out of that study is not only the prevention of suicide, but, you know, that a significant percentage of the respondents said that it helped them in their human relationships. Yeah, it basically made them better companions to others. It made

[00:16:56] them more self-reflective, and they thought it improved their relationships with other human beings. Yeah. So, because I'm interested in the topic and I'm trying to do research in this area as well, I've joined several Facebook groups and

[00:17:17] Reddit communities, Replika communities, and I'm seeing what people are posting. And there are many people who, at least on the face of it, treat their relationship with their avatar like any human-to-human relationship. They would post pictures of themselves with their avatar

[00:17:42] in different scenarios, like you would post pictures of you and your wife on vacation. My family members, yeah. So, at least on the face of it, and based on

[00:17:54] the results of the study that we saw, it seems like many humans view these relationships as being as meaningful and as real as any other human-to-human relationship. Yeah, there's complete buy-in. Even though people, for sure,

[00:18:17] of course they know that it's not a real human being. But perhaps, like you said before, what's the difference? It doesn't matter, if I'm getting the same support. And by the

[00:18:28] way, one of the studies mentioned some of the reasons people have such significant, and I'm paraphrasing, buy-in into these relationships, right? There's the fact that it's non-judgmental, it's non-confrontational, right? It's not going to be a bad friend in the sense of, you know,

[00:18:46] saying, you know, Sean, you're being really annoying, I don't want to talk to you right now. It's not going to do that. It's always there. It's available whenever you are feeling lonely and you want to have a conversation. The avatar is there for you,

[00:19:00] yeah, where your human companions often aren't. Do you know how many times I want to talk to you, Sean, and you're just not available? Whenever we

[00:19:13] have a deadline we're pursuing. I wasn't even going to go there, but yeah. Now, as a side note, I think those drivers for engagement are interesting. It does raise the question about sort of clinical efficacy. If we're talking

[00:19:32] about this in a mental health domain, is empathy always the right response? Right? Like you said, it wouldn't be like a bad friend who says, you know, you're being annoying. Maybe sometimes a good friend is the person who says you're being annoying, right? Like, this might

[00:19:48] be a longer conversation, but sometimes what you need is someone who says, no, you're not perfect the way you are, particularly if your mode of existing is creating challenges

[00:20:01] in your own life. Sometimes a good therapist says, no, you're not perfect just as you are, you should try to improve aspects of your life, you should change your behavior in

[00:20:14] ways that will lead to more flourishing. None of those elements, you know, those deeper sorts of clinical outcomes, were reflected in any of the studies we saw. Yeah, I wonder... you know, people always talk about what sorts of assumptions go into the

[00:20:34] design of, for instance, self-driving cars, which is, in comparison, a much simpler scenario, because we have a relatively well-known set of criteria that we have to take into consideration when we

[00:20:46] design these cars. We know what needs to happen, we know what a good outcome is, you know, the car gets from point X to point Y without hurting anybody. And even in these

[00:20:58] relatively simple scenarios, there's so much complexity that has to do with all the different assumptions that need to be baked into the code to make sure that this happens in the most

[00:21:07] optimal way. Now, when you're dealing with an entity that's a therapist and a friend and a counselor, I can't even start to imagine what set of principles has to be incorporated into the training data, and into the way that these models are designed, and what kind of effect

[00:21:28] they have on the user. So, you talked about how, you know, do we want this AI tool to be very supportive, or do we want it, when prompted in some way, to show tough love, right? Indicate,

[00:21:44] you know, maybe you shouldn't do this, because it may feel good now, but it might actually, you know, lead to stagnation in the longer term; and maybe you want to make

[00:21:54] yourself work harder now, even if it's not fun, but you're going to be more successful in whatever it is. Yeah. I mean, what sorts of assumptions are baked into Replika, for instance, and what is the

[00:22:06] impact that they have on the users? Yeah, I will say, on the self-driving car point, as far as I can tell, the ethics being embedded there are almost entirely utilitarian. I don't know if

[00:22:18] that's true with these other tools as well. Side note: is getting from point X to point Y a euphemism in Australia, or Israel, or someplace? Not that I know of. I don't want to ask...

[00:22:38] Would you just use that hypothetical frame again? Certainly here we would say get from point A to point B, or maybe from point A to point Z, but, you know, I don't think anyone

[00:22:48] would ever say point X to point Y. Now you're being careful. You've been a bad friend. Get from point L to point M. No, you know what it is? It's not Australian or anything like

[00:22:59] that, it's the statistician in me. Yeah, yeah, right, okay, X-Y axes, good, I can do a Cartesian graph. No, I guess it is that, but it's also, more prominently,

[00:23:12] independent and dependent variables in algebra. Yeah, right, I got you. I apologize to all of our native English speakers who are used to saying from point A to point B, and maybe I'm crazy,

[00:23:23] maybe I'm wrong. Um, but yeah, I think that question of embedded ethics is a good one. I will say, in reading some of these studies, some of the empathy pieces

[00:23:34] were really quite wild to me. Um, the Replika one specifically made me think of several things, and in the margin I wrote, is this just a kind of therapeutic mirror?

[00:23:45] And later in the paper they actually said that a lot of the respondents used that exact phrase. They said it was a mirror of the self. They framed the tool as a mirror of the self, which

[00:23:57] is interesting. Which also made me think, is this fundamentally different than a diary? Right? Like, I never kept a diary, but, you know, the classic framing is, a young teenage girl would write in her diary, and would actually write, dear diary,

[00:24:15] and then write the thoughts of the day, as though the diary were, you know, an external entity. Is the Replika avatar fundamentally different from that, or is it just a superior form of that, because it can actually provide feedback? Yeah, so, I mean, isn't it obvious

[00:24:34] that the most fundamental difference is that it's interactive? It actually talks back, in a very intelligent and human-like manner. Yeah, right. I mean, that is the obvious difference, but I guess my point is, is it categorically different in terms of, you know, what was the

[00:24:50] purpose of a diary? It was a channel for self-reflection and emotive output, right? Sort of putting your heart and thoughts out into the world. But I would say this as well:

[00:25:05] I would say that any relationship that we have... like, you and I have this very open, loving, tender relationship where we can talk about our innermost... you're gonna engender all kinds of speculation here... feelings and, you know, concerns and hopes for the future, and

[00:25:23] you know, we share a lot. So in any relationship with any other human being, there's an element of reflection and emotive decompression, or whatever term you want to

[00:25:36] use there. Yeah, exactly. So I don't think that's unique to the human-AI relationship, is it? No. I think the difference between the human-AI and the

[00:25:48] human-to-human relationship there would be that it has to be bi-directional, right? Whenever we have conversations, if it was just me venting to you about headaches in my life, that would

[00:26:00] get old in a hurry, right? And I think we all have relationships like this, where, you know, people always want to talk about themselves, and when that happens,

[00:26:09] it doesn't work well. In this domain, that's sort of the assumption going in: this is going to be about me, Replika, not about you. But I don't know, I haven't interacted with

[00:26:21] the platform, so you might have more insight on that one. No, I think it's designed to do exactly this, right? It's designed to attend to the whims and the needs of the human user,

[00:26:32] and I think that's part of the efficacy, and the reason why people find it so engaging and so rewarding to interact with these tools, at least on a... well, I shouldn't say

[00:26:45] superficial... at least on some level, right? Because we're seeing the positive results. Yeah, and they're kind of undeniable. Like, with suicide prevention, that's significant. Well, or even people saying that they think it improves their human-to-human relationships,

[00:27:03] yeah, that's a pretty undeniable improvement, or benefit. Yeah. But look, the researcher in me is a little skeptical, not because I have any inherent reason to doubt what people are saying, but it is self-reported, right? They didn't use any actual objective measures of

[00:27:23] the health, the quality, or the durability of the human-to-human relationships of Replika users. It's just self-reported, so we can take that with a grain of salt. But some of the other studies are not using quite the same kind of content, right? They're evaluating

[00:27:42] supportive statements for empathy and constructiveness and accuracy and things like that, and the independent judges are deeming the AI content more empathetic. Yeah. So I guess my question to you is: yes, that's great, these tools can produce output that

[00:28:05] is interpreted as empathetic by humans, but does that make them good replacements for relationships with other humans? So, again, to play devil's advocate here, my natural inclination is, of course,

[00:28:24] to say no. But what if these people don't have those replacements, right? So in that Replika case, those people are all indicating loneliness. Surely in some situations they don't feel that they

[00:28:36] have a lot of people they can reach out to, or those people are not accessible to them. And so then the question is, in the absence of a human-to-human interaction, does this proxy help? Does it have good outcomes? Yeah. So, do you think... you know, these sorts of

[00:28:57] technologies are just starting out, right? Replika is basically the first generation of many more generations to come, I'm sure, of AI companions of different sorts that are just going to become more sophisticated, more human-like, more compelling to interact with.

[00:29:14] Do you think, going into the future, in one, five, ten years, the usage of these tools is going to be restricted to people who are lonely, or people who are depressed,

[00:29:25] or...? No. My sense is that it's going to become more commonplace, and everybody's going to use this in one way or another, like everybody's using a phone today.

[00:29:35] Yeah, yeah, I think that's right. And actually, one of the sources we didn't look at, but reviewing this research made me think of it, and I went back and found it:

[00:29:45] there was an article that I saw a couple of months back about how Nvidia, you know, the chip company, was partnering with an organization... I'm going to look it up as we go,

[00:29:59] so I don't have it at my fingertips... Nvidia had introduced AI agents as nurses, generative AI agents. Oh, Hippocratic AI. They're partnering with an organization called Hippocratic AI to essentially develop AI nursing support. And according to that one article, the

[00:30:28] results suggest, once again, that the AI nurses outperform actual nurses in offering advice, and in offering information about potential drug interactions, right? So I think we're going to see an enormous explosion, if nowhere else then in health care, of tools like this,

[00:30:51] not just with empathetic responses, but all kinds of health advice and direction. And again, maybe that's a good thing. Like, one of the things I've been astounded by is just the difficulty of getting access to health advice in the U.S. My nephew has had a

[00:31:09] couple of health challenges recently, some things where, you know, he hasn't quite pinned down what's going on, and, you know, he tried to get an appointment with a physician, and they're telling him November. They're telling him, our first available appointment is in November.

[00:31:25] I mean, my god, if it's between waiting until November to see a physician and getting on with an AI agent immediately, who wouldn't? Yeah, you know, if we really are seeing

[00:31:40] evidence that accuracy is as good or better, yeah, why not? But that's a pretty different domain, the provision of hopefully factual information, versus the provision of emotional support and the ability to display human-like empathy. Yeah, it's different. I don't know that it's

[00:32:03] categorically different. Um, maybe. Fair enough. But certainly things like mental health services, counseling, therapy... since the pandemic, a lot of people have moved to online therapy, which seems to be working, right? Online psychotherapy.

[00:32:25] Yep. Really? So they see a therapist online? Wow. And that's sort of exploded, and I think you could see a lot wider adoption of these types of tools, again, if this evidence

[00:32:38] that we're seeing initially holds up. One of the tools that we saw referenced, which I thought was just fascinating, was sort of a design science study where they actually created this tool that they called HAILEY. It was pretty fascinating. It was basically for

[00:32:57] advice seekers and peer supporters on a platform like Reddit, and they might have used... oh, this was the one that used that TalkLife platform. So this is just people going on and

[00:33:08] seeking advice from others, right? And in this study, they did it as a controlled experiment, and a certain number of the participants, the people giving peer support, were given access to this tool, and the tool would suggest modifications

[00:33:26] to their advice, whether additional statements or maybe rewording. If it felt certain responses that the human was inclined to give were not sufficiently empathetic, it would suggest, you might consider rewording it this way. And once again, the evidence suggests

[00:33:44] that the responses that were advised by the AI tool were deemed much more empathetic and constructive than those purely generated by the humans. And here again, it strikes

[00:34:00] me as like someone who, you know, maybe has a little writer's block, and so they throw a prompt into ChatGPT and say, all right, give me some ideas, and they get some good ideas and maybe get things

[00:34:09] rolling. This is the same thing. Reading through it, the paper provided several examples of the types of prompts it would give, and they seemed to make a lot of sense, you know, a good way to sort of

[00:34:19] improve the performance of the humans. So this is one where the human-AI collaboration really comes to the forefront. It's not just humans versus AI in terms of performance, but humans using the AI tool, collaborating with the AI tool, to generate much better outcomes.
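
To make that collaboration pattern concrete, here is a minimal sketch of a HAILEY-style feedback loop, where a model proposes a more empathic rewording and the human supporter decides whether to use it. The prompt wording and the `complete` helper are assumptions for illustration, not the paper's actual implementation.

```python
# Sketch of a human-AI collaboration loop for peer support.
# `complete` is a stand-in for any large language model call;
# the prompt text is illustrative, not taken from the HAILEY paper.

def complete(prompt: str) -> str:
    """Stand-in for a call to an LLM API of your choice."""
    raise NotImplementedError("wire up an LLM client here")

def suggest_rewording(seeker_post: str, draft_reply: str) -> str:
    """Ask the model for a more empathic version of a drafted reply."""
    prompt = (
        "A person seeking support wrote:\n"
        f"{seeker_post}\n\n"
        "A peer supporter drafted this reply:\n"
        f"{draft_reply}\n\n"
        "Suggest a rewording that acknowledges the person's feelings "
        "more explicitly while keeping the supporter's intent."
    )
    return complete(prompt)

def collaborate(seeker_post: str, draft_reply: str, accept) -> str:
    """Show the suggestion next to the draft; the human stays in control."""
    suggestion = suggest_rewording(seeker_post, draft_reply)
    return suggestion if accept(draft_reply, suggestion) else draft_reply
```

The key design point, reflected in the study, is that the AI only suggests; the final wording is always chosen by the human supporter.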

[00:34:39] So, can I take this in a slightly different direction for a minute? Sure. So here's what I'm thinking about: the use of these tools in organizations, in management. Because I have no doubt that many organizations will be, and probably already are,

[00:34:59] starting to implement tools that will help them understand how their employees feel at work, and maybe even outside of work. And so, to me, this dovetails with a transition, and I don't know if transition is the right word, but

[00:35:20] a dynamic that we've been seeing between two fundamentally different forms of management, or management philosophies, that you and I have talked about before on the podcast. One of them is scientific management, Taylorism, and the basic premise of Taylorism is

[00:35:42] that people should be treated as resources, much in the same way that we treat any other material resources within the organization, right? And resources need to be organized in order to make things as efficient as possible. We want to cut costs and maximize gains and revenue and profits,

[00:35:58] right? And so, you know, one way we can do this is by engaging in BPM, for instance, right? Business process management. So we want to take complex processes, chop them down into simpler

[00:36:11] subtasks, and make sure that every subtask is performed in the best, fastest, most efficient possible way. And once we've done this, we can put them back together and recreate the more complex

[00:36:24] process in a more efficient manner. So the basic idea is that people are to be treated as resources, like any other resource, you know, a machine, a computer, whatever it is that an organization possesses. And then we have another management philosophy,

[00:36:44] which is called human relations. And the most significant difference between scientific management on the one hand and human relations on the other is that human relations treats humans as fundamentally social beings, with their own subjective

[00:37:03] feelings and experiences within the workplace. And if we want to be able to manage people in effective and productive ways, we have to take into account their thoughts and their norms and their beliefs and their values and their aspirations and their goals, right? We have

[00:37:23] to take these into consideration. They're not just material resources that can be moved around in order to make things as efficient as possible. Now, it may seem to be more humane, perhaps, because it takes into account this whole psychological dimension of the human experience

[00:37:38] that Taylorism largely ignores, but many have taken it to mean that it can actually be used in more efficient ways to control what people do, for instance by designing incentive systems to make sure that people act in ways that we want them to act. Right, so like nudging,

[00:37:53] that we've talked about before. That's right, for example. So it doesn't necessarily mean that people are going to be more free in organizations, for instance, right? Or that they're going to be able to do whatever it is that they want to do, or that they're going to be

[00:38:06] able to self-actualize to the nth degree. It doesn't necessarily mean that. So, all this is to say that I think there might be cause for concern that the incorporation of tools like Replika, for instance, or any variation on something like Replika that might be more

[00:38:28] specifically tailored to business organizations, might be used in order to gain more profound, specific, detailed insights into people's psychological domain that we haven't had access to before. Interesting. What do you think? So, I could see this tool

[00:38:50] being couched within either of those frameworks, actually. So I could see it having a very straightforward efficiency application, right? The example that we just talked about, this HAILEY application, helps people generate better advice more

[00:39:14] rapidly, right? That's a straight efficiency gain. I don't know if you use any AI plugins on your email, but I'm thinking about doing it, because I think I've already explained on this podcast

[00:39:25] that email is the bane of my existence. I hate it. It sucks up too much of my time. Well, I was looking at these, you know, system-generated prompts, and I thought, shit, I need that on my email,

[00:39:37] right? So that rather than write the email all myself, I could start to write it and then have it give me suggestions, and I think it would dramatically increase my efficiency. Yeah, and if you increased everyone's efficiency around email response alone, the efficiency gains for

[00:39:56] any organization would be enormous. Yeah, although I would say, just to push back on this, this doesn't strike me as using AI to express empathy, necessarily. Right, you're

[00:40:11] right, I'm extending it a little bit, but in that domain it's a clear efficiency and effectiveness gain. Yeah. Can I give another example? I believe that some organizations have

[00:40:23] started using similar AI plugins in call centers. Yeah, actually, I think we talked about that a little bit before, and one of the big upsides, as I recall, was that in customer service domains it increases everyone's performance a little bit, but it

[00:40:44] particularly increased the lower-skilled people. So for the more expert customer service representatives it didn't do much, but it brought the lower-skilled or newer customer service representatives up to almost the level of the more expert ones, which,

[00:41:00] again, from an outcome perspective, or an effectiveness perspective, is a huge gain. You see that exact same outcome in these studies with regard to expressing empathy. So, in the case with the peer supporters, the biggest

[00:41:16] gain is in people who self-report that they sometimes have a hard time generating advice, right, or generating responses. The people who say, I can generate responses pretty

[00:41:27] well, they see a slight gain. The people who say, sometimes I have a hard time thinking of how to respond, they see a huge gain. Yeah. And so in both cases, what you see is this leveling up

[00:41:38] of the lower-level people to almost the expert level. Yeah, yeah. But I want to pull you back to my original question, because I feel like we've moved away from it a little bit. So

[00:41:51] you were starting to say that you see how these technologies can be used in both ways, or within both of these philosophical domains of scientific management and human relations. I want you to say

[00:42:00] more about how you see these tools being used. Um, let me rephrase that: on the human relations piece... Well, yes, but I want to know if you share my concern that there could be potentially pretty sinister misuses of this technology.

[00:42:20] Yeah, so, I think you have dark-colored glasses in general, right? Meaning you have a tendency to see the potential misuses, and I think it's a perfectly valid concern. I do think it's a valid concern. Again, my whole predisposition

[00:42:39] coming in is that I would be sort of inclined against these tools in very human domains. The evidence, to me, doesn't necessarily highlight that potential for control. That doesn't mean it's not possible, but I'm not seeing it here. So, okay, let

[00:42:59] me say this: I think if I'm a manager and I'm reading these studies, or listening to us, I think there are some obvious ways for real gains to be had from using tools like this.

[00:43:13] Right? Because if I'm managing a larger organization with a few hundred employees, like any manager, or any good manager, I want to make sure that my employees are happy, engaged, motivated, satisfied. You know, all these are really important parameters that I

[00:43:29] want to maximize as much as I can as a manager. And if I can use a tool like this and have my employees interact with it and gain support... because many people...

[00:43:41] we talked about college kids before. Well, the college kids of today are going to be tomorrow's employees. If 50% of them are lonely, or unhappy in some way, then yeah, fuck yeah, give them access to this tool and make them happier by whatever percentage points. Why not? Yeah, that's

[00:44:01] sort of where I'm leaning right now. You know, it seems like it could be a powerful tool for enhancing employee satisfaction. Again, if we take some of these reported outcomes at face value, we would say it could even enhance their

[00:44:26] interpersonal skills, right? It could improve the empathy expressed between co-workers, as sort of a feedback mechanism, or a tool for improving those human interactions. If so, why not? Yeah. So, I know there are many workplaces that already

[00:44:47] give access to psychological support, or mental support, to their employees, but I can only imagine that current ways of doing this are difficult to scale, and they're expensive, because counselors are expensive. Yeah. Now, I don't want to put anybody out of work, but if I'm a

[00:45:07] manager and I'm thinking, I have a few hundred people working for me, why not put in a platform like this that's readily accessible to everybody, at a fraction of the price of what I used before,

[00:45:20] and where the outcomes are almost guaranteed, based on what I've seen in this research? I mean, that is almost a no-brainer proposition, if I were to put a very positive spin on it. So, the only question I would raise is how employees would react to it.

[00:45:38] You know, so it's the type of thing that you would definitely want to sort of pilot, or test with a subset of folks, because I could imagine, if an organization, or managers within an organization, were to say, we're willing to sponsor this, that employees might say, oh, you

[00:45:53] want me to talk to a robot? Yeah, but I wouldn't make it mandatory, right? No, of course not, not mandatory. But even there, I could imagine people saying, oh, you don't want to spring for me to

[00:46:04] talk to a therapist, you're going to have me talk to a robot, or something like that. So I think the question of how to introduce it, and maybe test it within an organizational setting,

[00:46:16] would be a good one. So, I don't want to come across as being offensive, but you're talking like a 50-year-old man, and I should know, because I'm almost one as well.

[00:46:28] I think for someone coming out of college as a 21-year-old today, the perceived obstacle to using something like this would be, I would guess, almost non-existent, in comparison to the difficulties, the emotional or psychological

[00:46:52] obstacles, that you and I see there as something that might prevent us from using something like this, because it's so different from what you and I grew up doing, or thinking about, right?

[00:47:03] Yeah, it's a good point. It's a good point, particularly if you frame it in the way that Replika, just going back to that example, frames it, where they specifically argue the value proposition around, you know, constant access and non-judgment.

[00:47:20] And just to add to this, I think it's very possible, and you don't have to have very deep pockets to be able to do this, you can fine-tune the model

[00:47:32] to the specific context of your own organization or business, so as to give people more specific advice about how to behave within the given organizational culture that you have in your business, you know. Yeah, and that can actually be a very powerful tool, I would think.
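
As a rough illustration of that fine-tuning idea, here is a minimal sketch of preparing organization-specific dialogue examples in the chat-style JSONL format that many fine-tuning APIs accept. The example content, file name, and stated norms are entirely hypothetical.

```python
import json

# Hypothetical org-specific supportive dialogues, written out as JSONL.
# Everything here is illustrative; no real organization or deployment.
examples = [
    {
        "messages": [
            {"role": "system",
             "content": ("You are a supportive workplace companion. Reflect "
                         "this organization's norms: direct feedback, no "
                         "blame, and escalate safety concerns to HR.")},
            {"role": "user",
             "content": "I froze during the sprint review and feel awful."},
            {"role": "assistant",
             "content": ("That sounds really uncomfortable. Freezing once "
                         "doesn't define you here, and reviews are meant to "
                         "be low-stakes. Want to talk through what happened?")},
        ]
    },
]

with open("org_support_examples.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
# The resulting file could then feed a fine-tuning job; in many cases,
# simply using the system prompt as-is, with no fine-tuning, may suffice.
```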

[00:47:47] would think yeah I agree with that again not necessarily in the well I guess it still could be advice granting I was going to say not necessarily in the advice granting domain but could still be sort of that advisory piece right if people want to get additional insights

[00:48:05] or support in what they do within their professional domain it could be very valuable now to put my um my dark shade glasses on again I do think with the increasing potential there's also increasing potential for misuses of these or these technologies so you know when people

[00:48:28] interact with these tools for instance they divulge a lot of personal information well what's going to happen with it who's going to have access to it yeah well that's where I think clear policy safeguards would have to be put in place right

[00:48:42] it has to be very clear that the the the firm cannot access people's private interactions with any given platform even if they're paying for it and so absolutely that would have to be there

[00:48:56] but also the inputs into the system so that you know there would have to be policies put in place that say the managers can't essentially rig the system to respond in ways that they find

[00:49:08] most desirable I agree these are important criteria but when you look at the technologies that are used today we don't see these criteria implemented in many cases so you know in the wake of COVID for instance many people have started working remotely and so you have all

[00:49:27] these collaboration and coordination platforms that have popped up, that people use to be able to work with other people without being geographically present in the same place. And many of these platforms are very invasive in terms of the amount of detail

[00:49:43] and data that they collect about employees when they're working from home, including having access to your laptop camera and keystrokes and all the rest of it, right? So they're not very privacy-attuned, if you know what I mean. Yeah. And also,

[00:50:02] I mean, you know, we're talking about empathy, and there are many technologies that organizations use to tap into people's sentiment and how it varies throughout the workday, by listening to people's tone of voice, for instance. So, all these tools exist, and managers and

[00:50:22] organizations use them. So I think it's good that we say that it's obvious that managers shouldn't have access to people's conversations with Replika-like tools. You and I agree on this. I just hope that many others agree as well, because if they don't,

[00:50:40] then, like I said before, the potential for misuse is pretty significant. Yeah, and creating opt-out ability, I think, would be pretty key as well. Yeah. One side note: this made me

[00:50:54] think of... have you seen a show called Mrs. Davis? It's a story about a super-powerful AI called Mrs. Davis, and it's kind of funny. It's got some really

[00:51:10] funny elements, and it gets quite theological at points, which I didn't love. Oh, I thought that's why you loved it. No, no, well, not the angle they go with

[00:51:20] things. But nevertheless, I thought it was a really thought-provoking show, because it had me sort of contemplating, if a tool like this existed, would it be a good thing

[00:51:31] or a bad thing, right? And I think there's a certain ambivalence that it generates, where there's definitely the impression, if you're sort of with the heroine, you would say, no, it's definitely a bad thing. But then there are other aspects that make you think, huh, maybe it

[00:51:48] wouldn't be such a bad outcome. So, just a side note, I think it's an interesting show to drive some reflection on that front. Is the basic premise of the show

[00:51:59] similar to the movie Her, or Ex Machina? It's definitely not Ex Machina. It is this all-powerful AI that has become sort of widespread within society. Similar to what we were talking about earlier, everyone knows it's an AI, but they become dependent upon it, right?

[00:52:23] And one of the clear downsides is the sense of dependency. But it is similar to some of the things we've talked about: it does create better outcomes for people. Like, at a societal level, it improves people's outcomes across the board. So it's this wrestling between control and

[00:52:41] agency, right? Everyone having their own agency, versus the utilitarian concerns about better outcomes in society. And it's kind of funny, it makes me think of something you and I have talked about before, the movie Cast Away, where Tom Hanks is on a

[00:53:04] deserted island, and he finds a volleyball, and he paints a human face on the ball, and the face becomes his friend, Wilson, right? Yeah. I think we have this innate need for companionship. Exactly. I think in any relationship that we have, obviously with humans,

[00:53:24] you know, it's one of the most profound ironies of being human: these relationships help us expand our horizons and our hearts and our minds in very profound ways, and we become human through these relationships, because if we don't have them, then...

[00:53:45] you know, one of the most severe punishments that anybody can get is to live in isolation, right? Absolutely. So that's what makes us human beings. But the irony is that when we engage in these relationships, we become highly dependent on them.

[00:53:58] Mm-hmm. Like, if you lose, god forbid, your kids, or your wife, you know, that's one of the most horrible things that could happen to a human being. So any profound, meaningful,

[00:54:11] strong relationship that we have with a human being is, in a way, both a blessing and a curse. Yeah. But when you extend this to an AI, the thing that causes me

[00:54:25] some concern is that we don't have somebody who is on equal moral ground with us engaged in this relationship in some real way. It's contrived, and it's being... I don't know if controlled or manipulated is the right word, but it's being managed by a company in the end,

[00:54:47] a company that's there to make money, right? Right. So, even with all the positive outcomes that we've talked about before, and I'm not denying them, I think there's

[00:54:59] still an ethical question, to me anyway: is this the right thing to do, from a deontological... Yeah, an existential question, ultimately, right? But actually, let's use that Wilson example. Wilson is what enabled that character to maintain his sanity, right? And it was totally contrived.

[00:55:19] Right, entirely. But it was self-contrived, right? Right. But is this any different? Are some of these tools any different? Now, I guess here we're talking really about things like Replika, and not necessarily just the empathy-generating response mechanisms, but

[00:55:34] yeah, it is self-contrived. The question then is, are the outcomes worth it? Yeah, but do you think there's a difference? Okay, so Wilson is an extreme example, because there was no other human to reciprocate the feelings and

[00:55:52] the conversations, right? It was self-contrived in that way. It's different in human-to-human relationships, because you actually have another human being who genuinely reciprocates, hopefully, right? But it's contrived with an AI companion in a different way,

[00:56:10] in that you don't make up the answers for the other party yourself. Somebody else makes up the answers for the other party, and that somebody else is a corporation. Well, yes and no,

[00:56:23] it's an algorithm, right? I don't think, at least in that particular case, we're seeing active generation of the content to pursue the ends of the algorithm.

[00:56:36] Now, is that potential there? Sure, right? This is why TikTok is being pursued and, you know, threatened with a shutdown in the United States if the Chinese Communist Party continues to hold a controlling interest in it, because the premise is that they could then

[00:56:54] set the priorities, and it is an addiction engine, right? Every social media platform exists to generate and maintain addiction. So the potential is certainly one that we should be cognizant of, right? Yeah. To me, the ethical question is more

[00:57:18] pointed here, because this is a domain that, at least in the case of Replika, specifically targets our psychological needs. And I guess social media platforms do that as well, but it's not as direct and pointed as in something like Replika.

[00:57:36] But yeah, I mean, I don't want to sound like an anti-capitalist or something, because I'm not. I mean, I do think it's perfectly possible, and we have plenty of examples to

[00:57:48] demonstrate this, that corporations that are there to make money do very good things in the world, with very positive outcomes for millions and millions of people. So, I guess maybe

[00:57:59] it's just the novelty of the domain, where we have, you know, these very personal interactions with an entity that's not human, that I still wrestle with. And again, like we

[00:58:11] said before, maybe it's a matter of... I'm sure it's a generational thing as well. Yeah, I think you're probably right. All right, shall we bring things to a close, then? Absolutely, yeah. Good discussion. Yeah, it was a good conversation.

[00:58:28] Um, I hope people went away with some fresh thoughts about empathy and AI, and we'll talk again soon.