Analytics Anecdotes - Episode 8: Medical Ethics and Data Ethics

In this episode of the podcast, Tony is speaking with Dr. Casey Rentmeester. 

Casey is the Director of General Education and an Associate Professor of Philosophy at Bellin College. He specializes in ethics, is the author of the book "Heidegger and the Environment", and has written numerous articles on philosophy. In Tony's conversation with Casey, they discuss ethics and how it applies to data and AI.

If you would like to connect with Casey, you can find his LinkedIn here.  

Medical Ethics and Data Ethics

You can listen to the podcast below, or follow this link to listen. Don't forget to subscribe and like us on your favorite podcast app!


Transcript

Tony Olson 0:02
Welcome, everybody, to the latest episode of Analytics Anecdotes. I'm here with Dr. Casey Rentmeester. Casey, thanks for joining us today.

Casey Rentmeester 0:10
Thanks for having me.

Tony Olson 0:11
So, Casey, it's a Sunday afternoon here, one of those bright summer-or-spring Wisconsin days, and we're right after, you know, yard work and stuff like that. So we're both having a drink. What have you got there? What's on tap for you?

Casey Rentmeester 0:29
So this is a Buzzy Blonde. It's a coffee ale out of Badger State. Pretty tasty. How about you?

Tony Olson 0:36
Myself? It's an aged rum. I'm into aged rums. So I have a 12-year-old Zaya mixed with just a little bit of Coca-Cola.

Casey Rentmeester 0:46
There you go.

Tony Olson 0:47
Well, thanks for coming today. You know, I think it'd be really great for the audience: can you go ahead and introduce yourself, tell us a little bit about you and your background?

Casey Rentmeester 0:57
Sure. So I'm kind of one of the rare academics who found his way home. I grew up in Green Bay, Wisconsin, and went to St. Norbert College, basically to play football. But then I found philosophy while I was there, which is what I ended up majoring in. After my undergrad degree in philosophy, I moved to China for a summer and just learned a lot about Eastern philosophy, things like Buddhism and Taoism. I came back and went to my master's program at Kent State University in Ohio, then moved down to Tampa and got my doctorate at the University of South Florida in 2012. Then I got a one-year visiting professor gig up in Alaska. So I moved up to Alaska, all the way up from Florida, which was insane.

Tony Olson 1:43
I'm sure that was a drastic jump.

Casey Rentmeester 1:45
That was quite the car ride, we'll put it that way. So I just kind of went up there; I knew it was a one-year gig, so I basically brought just books and clothes. I spent the year hiking and hanging out up there, then got a job at Finlandia University up in the UP, the Upper Peninsula, where I started a philosophy program. I spent four years up there. And then the job that I have now came up, about three and a half, four years ago. I'm the Director of General Education and an Associate Professor of Philosophy at Bellin College in Green Bay. Bellin College is a small health sciences school, and I kind of run all of the non-program classes, so I'm in charge of the sciences, social sciences, and humanities, but then I teach all the philosophy classes as well.

Tony Olson 2:36
Awesome, thank you. It's quite the background. So now you're in it: you got your doctorate and you're teaching philosophy to young learners now.

Casey Rentmeester 2:47
Yeah, I got to where I wanted to go. Yeah. It took a little bit.

Tony Olson 2:51
Yeah, I suppose it probably takes some time. So, tell me a little bit about, you know, when we were first talking about this and saying, hey, I think there are some good topics that you can bring to this podcast. We were talking about your background in medical ethics and teaching it, especially with your philosophy expertise. You were telling me a little bit about it, and we were drawing a lot of similarities to the challenges that are going on in the AI and data science world, AI ethics. And I thought it might be really good to start off with: what's the medical profession doing from an ethical perspective?

Casey Rentmeester 3:31
Sure.

Tony Olson 3:32
And we can go from there.

Casey Rentmeester 3:33
Yeah, so medical ethics really stems all the way back to ancient Greece. Back then, so this is like 2,500 years ago, you have the Hippocratic oath, which most people know; "do no harm" is one of the big things in the Hippocratic oath. And you also have things like respecting the confidentiality of your patients. So this is a pretty long-standing tradition. It's only recently, the past 50 years, probably since the 1970s, that we started to really emphasize things like patient autonomy, the right to make your own decisions regarding your health. The idea that you can get a second opinion comes out of this; the idea of informed consent, knowing what you're getting into before you consent to it, comes out of this. But all of these sorts of things are backed by this principle that you as a human being have a right to choose your life in accordance with your interests, and this is something that philosophers have talked about for eons, right? So now, when you talk about medical ethics, you have four principles that guide those sorts of things. The first one is autonomy; in Greek, "auto" means self and "nomos" means law. So you have the right to make your own laws about your life, your own decisions about your life. We respect patient autonomy. Before really the 1970s, there was more of what's called paternalism, where the doctor just told you what to do and you didn't have much of a say; you just followed doctor's orders. But these days we're in the era of patient autonomy, where you have a say. So that's the first one, autonomy.

Tony Olson 5:06
Do you know what changed there, from an autonomy perspective, from the 70s to nowadays?

Casey Rentmeester 5:11
Yeah, there's a famous case, Canterbury v. Spence. This happens in the early 1970s. There's a surgeon, Dr. Spence, and this patient, Canterbury, comes in who essentially is having back pain, and they're going to do a back surgery, right? And the surgery seems to go fine. They don't tell him afterwards, though, that he should not try to go to the bathroom by himself. So he gets up post-surgery to go to the bathroom, he falls, and he ends up partially paralyzed for the rest of his life. So the argument was that the doctor did not communicate those risks beforehand. And he didn't; he admitted that he didn't. But this led to doctors being a little bit like, oh, we'd better watch our backs so we don't face litigation. And this leads to things like informed consent and all these laws, really, in the 70s. It goes crazy.

Tony Olson 6:12
That's really interesting; I didn't know that. That's great background. So it took the courts, and it took liability, really, to drive that change.

Casey Rentmeester 6:22
It's a good example of where law codifies ethics, right? You have these ethicists who have been talking about this for a long time; I mean, most of this autonomy talk goes back to a guy named Immanuel Kant, who was an Enlightenment thinker, so we're talking late 1700s, early 1800s. But it took some litigation for the legal system to say, yeah, we need to get something in place here so that you're not going to be liable for those sorts of things. And then we check these protocols off, right?

Tony Olson 6:53
Oh, that's so interesting. So autonomy was the first so...

Casey Rentmeester 6:59
Right. So there are three more. The second one is beneficence, and you've got the word "benefit" right in there. Beneficence just means to do good. So you should aim to do good; in other words, you aim to do good for your patients, not only for yourself, if you're a medical practitioner. The third one is probably the most famous one, which is non-maleficence, and this means first, do no harm. So whatever you do, some doctors will say, at least don't harm your patient; don't leave your patient worse off than they were when they walked in. That's non-maleficence. And then the last one is social justice. You want to have an eye towards making sure everybody has a decent shot at a good life, and you're not biasing things against certain populations. So those are the four.

Tony Olson 7:49
So going back to doing good. In the medical world, with autonomy, there are clear legal ramifications. Doing good and always acting in the best interest of the patient, I'm sure that gets complex. Was it liabilities or legalities that drove that second concept too?

Casey Rentmeester 8:17
Well, really, that second concept goes all the way back to the Hippocratic oath. The idea was that when you shift from being a student of medicine to actually practicing medicine yourself, you take this oath; it's like a public vow that you're going to practice in a certain way. And you basically would say, look, I'm not going to practice in a corrupt manner; I'm going to try to do what's good for the patient, what's best for the patient. So that's been there for a long time. But the idea that now I want to ask the patient what they want, that autonomy piece, that's new. That's something that really the litigation drove.

Tony Olson 8:58
Interesting. From the Hippocratic oath perspective, you know, that whole beneficence...

Casey Rentmeester 9:04
Yeah

Tony Olson 9:04
Is that kind of self-regulated? You know, like all the doctors are saying, yeah, I'm gonna do this, and there's no follow-up if you don't? Or are there repercussions if you don't?

Casey Rentmeester 9:17
So, doctors are pretty big on professional autonomy, which means they want to call the shots regarding how they practice, right? They're the experts; they don't want big government coming in and saying you have to do things a certain way. Now, at the same time, recently the government has had some mandates. For instance, you now have electronic health records, right? That wasn't just a matter of, well, this might be more efficient; as opposed to actually writing things down on a patient chart, let's put it in a computer so it's more easily accessible, more legible, all those things. That wasn't a decision made for efficiency's sake; that was mandated by the government. So as much as professional autonomy is a big deal in medicine and doctors really covet that sort of thing, there are laws in place, obviously, that regulate some of this stuff. And every big health system has an ethics board, right? And I serve on one of these, where if something's a little bit shady or gray, it'll go to the ethics board. And then we'll talk through: well, what happened here? What's the patient's perspective? What's the health care system's perspective? And we'll talk through what should happen, given what we know.

Tony Olson 10:30
That's super interesting. So that's done by each health system, then, as the board. So it's self-regulation, a little bit.

Casey Rentmeester 10:38
It is. And usually on those you'll have a lawyer, and you'll have the token philosopher like me. And then you'll have maybe your chief nursing officer, your physicians, right? So you'll have some of the administration covered as well. But certainly you want all these different perspectives, so that it's not just like, let's just trust the physician; that doesn't really happen anymore. If there's any gray area, it goes to something bigger.

Tony Olson 11:07
Interesting, okay. I think the non-maleficence is pretty self-explanatory. Obviously some legal ramifications on that one also.

Casey Rentmeester 11:17
Absolutely right.

Tony Olson 11:18
Yeah. How about the social justice? I think that's an interesting one. Do you have any historical context on that one?

Casey Rentmeester 11:25
So in 1986, we passed this law in the United States called EMTALA, which basically says you can't turn somebody away from an emergency room, regardless of whether or not they can pay. If somebody comes to an emergency room at a hospital, you can't turn them away. So if they say, I'm here, I know I can't pay for this, but will you treat me, you have to say yes. And if you don't say yes, if you turn them away, there are massive ramifications: the physician gets fined $50,000, and so does the health system. So you don't want to do this, right? Of course, the issue is, if they can't pay, who's going to end up paying? It's the people who are actually paying, and that's why prices keep going up and up and up. So EMTALA is really one of those things where we're ensuring there's some social justice here. Everybody has a right to at least be treated in an emergency setting. Even if we don't have the same sort of universal health care that you have in European countries, for instance, we at least have something like that. But it has its own issues as well.

Tony Olson 12:38
It's interesting that social justice, which can be such a broad topic, really comes down to: you've got to treat somebody in the ER. It's one piece of legislation, if you will.

Casey Rentmeester 12:49
Yeah, right. So really, if you think about social justice, justice typically means fairness, and you want to make sure that everybody has a fair shot at a decent life. At least when I teach this stuff, I try to argue that even a four-year-old on the playground understands fairness, the very basic concept. I've got a four-year-old and a five-year-old at home. If I give my five-year-old a piece of candy and say, split this with your brother, and she takes three quarters and gives him a quarter of it, he's gonna know that's not fair, if he sees the whole bar, right? So the idea is, this is such a basic concept that you understand it immediately, whether or not things are fair, whether you get a fair shake. Now, not that healthcare is necessarily completely fair; we all know that's not the case. But there are some laws in place that promote fairness to some extent.

Tony Olson 13:51
Right, interesting. You know, fairness, when you think about it, the ER is just a microcosm; there are a lot of other levels of fairness in healthcare, I'm sure, no doubt. And the same question on regulation there: that one was more of a fine, then, legislated? Yeah, okay.

Casey Rentmeester 14:10
Yeah, yep. And there are other things in place as well. It used to be the case, for instance, that physicians could get kickbacks off of prescriptions, so they would peddle certain prescription drugs because they knew they were going to get kickback money for them. They don't allow that anymore. So they've kind of stepped up their game with making sure that, for instance, Big Pharma, they still have a ton of power, but it's been limited more in recent years, I guess.

Tony Olson 14:43
Hmm, super interesting. Yeah, that was a great overview from a medical ethics perspective. Before we shift to applying some of these concepts to the data science and AI realm, are there any other comments or thoughts that you have that we didn't touch on in this first part of the discussion?

Casey Rentmeester 15:03
Yeah, I think the biggest thing, because I teach at a health sciences school, the biggest thing that I try to stress is that ethics is more than what's legally permissible. So although ethics and law are linked, ethics is a much broader sphere. Every profession in medicine has a code of ethics, but there are so many things outside of that code of ethics, and that is where the ethics really lies: how you treat your patients with compassion, with empathy, all those things. You have those possibly in a code of ethics, but when it happens in an actual context, this is something that takes experience, and it takes skill to do it well.

Tony Olson 15:53
From a health system perspective, you know, they're not all nonprofit, right? There are for-profit organizations, correct?

Casey Rentmeester 16:02
Yep. We have both.

Tony Olson 16:03
And they've already implemented these types of ethics programs, then, obviously?

Casey Rentmeester 16:08
Yeah. Whether you're for-profit or nonprofit, you'd have an ethics program, definitely, yes.

Tony Olson 16:13
So it does exist in the for profit world?

Casey Rentmeester 16:15
Absolutely, yes. Yep.

Tony Olson 16:19
Do you have any opinion on whether the same level of ethics is applied in for-profit and nonprofit health care? Or do you feel that ethics has been pretty well established inside the healthcare community, so that it's equal no matter which way the organization is structured?

Casey Rentmeester 16:39
I think it's been around so long, for instance the AMA code of ethics, the American Medical Association code of ethics, it's been around so long that it doesn't really matter whether you go to a nonprofit or a for-profit; you're going to get a similar experience, honestly. Now, it is true that the business model is going to be different. If you're nonprofit, you get more of a tax break, because all the money goes back into the corporate entity, whereas if you're for-profit, you can actually make money. So it's a little bit different in that regard. But in regard to the ethics of care, it's going to be the same.

Tony Olson 17:23
Gotcha. And then the application of the ethics of care is going to be the same. Right?

Casey Rentmeester 17:29
Exactly. Right.

Tony Olson 17:32
What level of ethical responsibility do individuals have at these organizations? Are there ever gray areas? Maybe that's why you have a board, but, you know, are there gray areas? And if so, how do you navigate them?

Casey Rentmeester 17:46
Well, ethics is all gray, right? I love the gray. The black and white happens to some extent: should you murder an innocent person? Of course not. So we have some black and white. But where ethics is interesting, it's gray. So, for instance, let's talk about somebody with a terminal diagnosis. Does the doctor tell the truth in that sphere? For a long time, we had what's called professional privilege, where it was kind of the doctor's call. They'd practice in the person's home, they'd be the family doctor, essentially, and they would understand whether or not that person could handle the information. Nowadays, we don't really appeal to professional privilege as much anymore. The idea is, if it's gray like that, the conversation might be: well, is this patient depressed already? Is the patient in denial of their condition? Can the patient handle the information? Those kinds of conversations might come up, but you would definitely at least bring the family in and talk to somebody, and let them know the truth of the matter. So there's still grayness there, but a very basic ethical principle, like tell the truth, needs to happen. Regardless of how it happens, it needs to happen. Now, you can think about, like, maybe a surgeon; they sometimes get a bad rap for not having as good a bedside manner, so sometimes you'll get that sort of perspective, and they're just brutally honest. Nowadays we practice interpersonal communication in medical school, so that it's not a matter of just brutal honesty; there's a certain delicacy to it, and you have to approach it in a certain way.

Tony Olson 19:39
And I think that's really interesting, especially as you take that concept of gray areas: communicating the gray areas to someone probably requires conversation, and context, and transparency, and also consideration, right? So, you know, shifting to today's challenges in ethics in data science and AI: I click "accept terms" on everything, right?

Casey Rentmeester 20:11
Well, most people do.

Tony Olson 20:12
Right. Like, do you have any ideas or thoughts on how that same ethics translation or discussion can happen? How do you transfer the way that it's been handled in the medical industry to the digital industry?

Casey Rentmeester 20:29
Well, I think the first conversation that needs to happen, from professionals in that sphere, is: what principles need to guide this conduct? In medical ethics it's pretty basic. Autonomy is going to be incredibly important, and I think that might be a principle that data analytics can also think through. So when you do click "accept terms," and all of a sudden your Facebook account is linked in with all these other things, or your LinkedIn account, whatever it might be,

Tony Olson 21:01
and you're getting shown content based on your data,

Casey Rentmeester 21:04
Correct, and we all know how this happens, right? You clicked on this link. For instance, my wife is looking to go back for her master's degree, so I'm researching master's programs for her, and all of a sudden I'm getting all these ads from these schools. Well, I have no interest in going back to school, but the algorithms don't know that. So to some extent the question is, do you have autonomy over who gets that information? A lot of the time, you just blindly click, yeah, I'm fine with that, or you text this number, and then all these other people get access to your number. The question is, shouldn't you have autonomy over who has that information? Especially if you think that all those clicks and whatnot are, to some extent, an extension of yourself and your interests. Now, obviously there's power in knowing what consumers want; you can then manipulate them towards buying your product. But I think the first question needs to be, what are the principles that need to guide this from an ethical perspective? I would venture to guess that the beginning point needs to be autonomy, and probably non-maleficence. You don't want to do harm to consumers, right? Because even though it's all about profit, if they find out that you did them harm intentionally, that's not going to be good for business either.
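
To make Casey's autonomy point concrete, here is a minimal sketch, in Python, of what purpose-scoped consent could look like in a data pipeline. Everything here (the names, the structure, the purposes) is hypothetical, not any real system's API; the point is only that data flows when, and only when, the person has explicitly opted in:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Which data-use purposes this user has explicitly opted into."""
    user_id: str
    # Opt-in only: a purpose absent from the dict defaults to False,
    # so silence is never treated as consent.
    purposes: dict = field(default_factory=dict)

    def allows(self, purpose: str) -> bool:
        return self.purposes.get(purpose, False)

def share_clicks(record: ConsentRecord, events: list, purpose: str) -> list:
    """Release clickstream data only for purposes the user approved."""
    if not record.allows(purpose):
        return []  # respect autonomy: no consent, no data
    return events

# This (hypothetical) user approved analytics but never opted into ad targeting.
consent = ConsentRecord("user-42", {"analytics": True})
print(share_clicks(consent, ["clicked: masters-programs"], "ads"))        # []
print(share_clicks(consent, ["clicked: masters-programs"], "analytics"))  # data flows
```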

Tony Olson 22:28
Maybe you sold them a more expensive product when they could have gotten the same product cheaper somewhere else.

Casey Rentmeester 22:33
And there's a huge thing going on right now in medical ethics, where you have doctors routinely ordering CT scans, which are more expensive than just your basic X-ray, when the X-ray would have been just fine to get what you needed. And so those kinds of conversations might have to happen in data analytics as well.

Tony Olson 22:52
That's super interesting; I did not know that. So tell me a little bit about that. So autonomy, even in the medical field, only came around because of liabilities and because of legalities, right?

Casey Rentmeester 23:09
Well, much of Western history was very paternalistic, and that word paternalism has "pater" right in there, which means you're acting as a father does to a child. So that's kind of how you looked at doctors forever: look, they took their oath, they promised to practice ethically, we're going to trust them, we're just going to follow doctor's orders. But it did take some litigation to change that mentality, and now we're at the point where autonomy is the norm; that is the era that we're in. But you're right that it took some missteps to get there. Now, I don't know as much about data analytics. Have there been missteps? You guys would know that a lot better than me. But it almost takes a breakdown like that to get this conversation going.

Tony Olson 24:00
Yeah, I think it's safe to say there have been missteps, and I think the ramifications of those missteps have unfolded and come forward. I think people are doing the work to get the facts out, that there are missteps in how we use data, ethically. And obviously you see Europe going that way, right? Like GDPR, and the additional AI restrictions that they're considering putting on. Or maybe not restrictions, considerations.

Casey Rentmeester 24:34
But that's becoming codified, right? So philosophers have been working on this stuff for a long time, and then it takes a little bit before society says, oh yeah, maybe we should. Ideas always happen first, right? You have the ideas first, and then eventually, if they're good ones, they turn into policies. So it takes a little bit for that to happen. The GDPR, if you look at it, is strongly about autonomy. It's about not allowing companies to manipulate consumer behavior; we want to respect the fact that people should have a say in regard to how their lives go. So that's heavily based on autonomy. Now, could there be an emphasis on social justice, too, as maybe one of these pillars for data analytics in terms of ethics? I think that needs to be in the conversation. But the question is, what principles first need to guide things? And then everything else kind of falls out from there.

Tony Olson 25:31
You know, the comment that you made about autonomy, and digital information being an extension of yourself, I think that's really at the core of the whole GDPR thing, right? That data is an extension of yourself, and therefore you should be in control of it. And, you know, I think a lot of organizations, especially when you consider small businesses, where half of the United States is employed by small businesses, they don't have the resources and probably aren't considering the autonomy implications of their analytics practices, or of the AI tools and systems that they're buying and then utilizing.

Casey Rentmeester 26:11
I don't think those conversations have happened yet. I do know, among philosophers, this is something that people are deeply concerned about. For instance, I don't have a smartphone, because I understand that every single click that you make on a device like that is viewable by somebody, and there's power in that information. Even somebody like me, who really values critical thinking, I teach it, right? We're still not purely rational animals; we are run by instincts, by emotions, all these other things that make me who I am. And I want to make sure, to as far an extent as I can, that I have control over my life. But these conversations I don't think are happening on a broader scale. We've got philosophers, who think for a living, talking about it, and maybe some other pockets of society, but I don't think this is a common, normal conversation yet.

Tony Olson 27:10
Right. And you know, I do think that AI ethics, and that's kind of the reason that we're having this conversation, ethical AI, is definitely a topic. But you're right, it's not really being applied at the same level that it's being discussed, and it's probably not even being discussed at some sites where maybe it should be.

Casey Rentmeester 27:32
But ethics is one of those things, I always tell my students this, that's inherently important. People care about their lives deeply; that's what it means to be human, to some extent. At least, that's what Heidegger, the philosopher I focus on, says: care is at the core of our being; that's what it's all about. And if humans care about their existence, and all those clicks on a computer or a smartphone are extensions of themselves, then, if you're going to take it seriously, you're going to be thoughtful as to who gets that information. Yeah.

Tony Olson 28:09
So we covered autonomy and, you know, some sense of applying that to the data science and analytics realm. Same with doing no harm; that's a very easy statement to make. The beneficence statement is kind of interesting: to do good, right? That's tough, especially in a for-profit industry. How would you suggest organizations define, or even understand, what that means for their consumers, or for their products as they build them?

Casey Rentmeester 28:45
Yeah, great question. So this is the most difficult one, even in medical ethics. You want to do good; that should be your intention, to do your patient good. And a lot of the time there's disagreement as to what that looks like: one doctor is gonna say this, another doctor is gonna say that. But it's understood that, at least in terms of your intention, you should be looking out for the best interest of your patient. So the question is, when you shift this outside of medicine and you're talking about consumer behavior, the bottom line is that capitalism does not have a conscience; the bottom line is profit. But if you can understand that ethical practice, looking out for your consumers and trying not to manipulate them, might be good for business, that's where that tie-in typically happens. So, for instance, I published a book in 2016 on environmental philosophy, and the argument that I try to make is that if you can show that going green is not only good for the environment but good for business, you've got a better chance at getting people on board. So that idea of ethical conduct, as long as you can tie it in, if it's good for the consumer but also good for the company, it's a win-win sort of situation, right? That gives it some teeth, I guess. If you're just gonna say, try to be a good person, but it's not in your best interest in terms of business, it's not going to go anywhere. So there's a balancing act there, I think.

Tony Olson 30:21
So I have a really interesting question, then, specific to data science, and, I mean, it's a really basic example. Think about a recommendation engine for a toy. You know, I've got a four- or five-month-old, and I bought him two toys online. And now it's suggesting that I buy a third toy.

Casey Rentmeester 30:48
Got it. So Richard Thaler, who won the Nobel Prize, not the Peace Prize, the Prize in Economics, for his work on this: it's nudging. This is nudging. You're basically taking consumer behavior that's already in the past, and you're putting something out there and dangling it in front of you, hoping that you jump. That's nudging.

Tony Olson 31:06
Right. So then, okay, there's an algorithm behind that, that has decided which nudge to make, which toy to buy next. You know, with these ethics that we lined out, you're talking about beneficence: is it good that I buy that next toy? Does my kid need that toy? Is it good for me, a consumer, to have that third toy? I just bought two, right? Do you get down to that level of detail when you talk about ethics and data science?

Casey Rentmeester 31:40
That's the thing. So power is tricky, and freedom and power are linked. So the question is, are you freely choosing to buy that toy because that's what your son would like? Or is it the case that you're being manipulated, and that company has power over you to such an extent that when you see it, you don't even think? Like Netflix, right? They don't even give you the chance to think; here comes the next episode. So it's the same sort of thing: are you consciously reflecting on whether or not that is within the realm of what you'd like for your child? Most marketers know that that's not how humans work. We're not rational animals. That's the most famous definition of human beings, but that's not how it actually works in context; there are a lot of things that change human behavior. So I guess, maybe it's nice, because that toy is legitimately within the realm of what you'd like to purchase for your child, and they're just showing you, here you go. But for some people, that's not necessarily what happens in their brain. It's more like: pleasure now, click. So it really depends. You have to be diligent, I think, as a consumer.
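
As a rough illustration of the mechanism Tony and Casey are describing, here is a toy sketch of the "customers who bought X also bought Y" logic behind that kind of nudge. The co-purchase data is invented; the takeaway is just that the nudge is computed entirely from past behavior, exactly as Casey says:

```python
from collections import Counter

# Hypothetical co-purchase history: each set is one customer's order.
baskets = [
    {"stacking rings", "play mat"},
    {"stacking rings", "play mat", "rattle"},
    {"play mat", "rattle"},
    {"stacking rings", "rattle"},
]

def recommend(owned: set, baskets: list, k: int = 1) -> list:
    """Rank items that co-occur most often with what the user already bought."""
    scores = Counter()
    for basket in baskets:
        if owned & basket:               # this shopper overlaps with the user
            for item in basket - owned:  # count items the user doesn't own yet
                scores[item] += 1
    return [item for item, _ in scores.most_common(k)]

# Tony bought two toys; the engine dangles a third in front of him.
print(recommend({"stacking rings", "play mat"}, baskets))  # ['rattle']
```

Nothing in that loop asks whether the third toy is good for the child; it only asks what shoppers like you bought before, which is exactly the beneficence gap the conversation is pointing at.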

Tony Olson 33:04
So that's an interesting thing, because we're talking about ethics in AI, and we're saying a lot of the responsibility, when you're building these algorithms, has to exist with the constructor of those algorithms, right? And then here we're saying, well, it's not really up to the constructor; it's up to the consumer to decide if that's good or bad. If you're just giving them freedom, giving them the choice, can you wash your hands of the ethics as a creator of these algorithms?

Casey Rentmeester 33:41
I don't think so. Personally, I don't think so. And I think, for instance, what's happening in Europe is they're trying to say, no, you can't just do it. In the US right now, with data analytics, it's basically the Wild West: you can get away with a lot of stuff, to the point of manipulation. Now, I personally think Europe's ahead of us in this sphere. So think about, I was thinking this morning: my children like sugary cereal, and they get advertised these sorts of things on the devices that they watch. So that's what they want. In Europe, you can't do that. You can't advertise with mascots; think of any cereal you have that has a mascot, right? You can't have that sort of thing over there, because they understand that children are not rational. Now, am I trying to say that adults are not rational? No, but it's definitely the case that your behavior is not always guided by rationality. If you're allowing these AI structures, these algorithms, to basically dangle this stuff in front of consumers, maybe when they're not at their best, it allows them to make decisions that they wouldn't have made otherwise. We can't think of people necessarily as always doing things that are in their self-interest.

Tony Olson 35:08
So they're not always going to have the freedom or the choice. Or maybe you can't guarantee that, can't count on them making the right choice.

Casey Rentmeester 35:19
Well, what's really interesting is this term "interpellation." This is where you think you're doing what's best for yourself, acting autonomously, out of yourself, but what's really happening is you've been manipulated to do things as somebody else wants you to do them. And that's the worry that I have. I've written a paper on this, on pharmaceutical advertising. Pharmaceutical advertising says: look out for your health; maybe you have these symptoms; you need this drug. So you're acting like, I'm doing something great for my health here; I'm gonna go talk to my doctor. But what's really happening is you've been interpellated; you've been manipulated by that advertisement to go pay for this drug. And you might not even need that drug. So that's the concern, I think, with AI as well. Maybe the person who coded it never thought about that, and usually there are way bigger consequences to this sort of thing than you think about initially. But maybe somebody should be in charge of whether or not there's an ethical mentality towards this sort of coding. I don't know.

Tony Olson 36:28
What's strange is, we're just scratching the surface on the types of AI being used out there with the recommendation engine. This is just the beginning. We could go into social justice, we could go into how algorithms are being abused in the justice system, and so on and so forth. And here we're talking about just a recommendation engine impacting choice.

Casey Rentmeester 36:53
Exactly. And if you think about it more broadly, I'm sure many people listening to this have seen The Social Dilemma, right? You click on something that leads to a conspiracy-theory sort of attitude, and all of a sudden you're just bombarded with that sort of information. So you go down this rabbit hole, to the point where you can spend hours on the internet looking at stuff that you would have never chosen to look at yourself.

Tony Olson 37:01
You'd have never searched for it, probably would have never even found it, if it wasn't presented to you.

Casey Rentmeester 37:25
So there's power in that, right? George Orwell, in his book 1984, says there's a lot of power in taking people's minds and rearranging them so that they support your interests. And there's power in data analytics that allows companies to do that sort of thing. So is it just up to the consumers to be vigilant? Maybe that's the way it is now. But it probably makes more ethical sense, at least, to have somebody looking over this and making sure that people are not being manipulated.

Tony Olson 38:02
Super interesting; thanks for that discussion. Lastly, the piece we haven't covered in this conversion from medical ethics to data science and AI ethics is the social justice component. When we talk about data science and the way that algorithms are being used in the justice system right now, I don't even know what to ask, so: what are your thoughts on that? How can that medical principle of social justice also be applied inside of data science and AI ethics, to make sure that when we're creating things, that's in the back of our minds also?

Casey Rentmeester 38:54
So a lot of the times when I teach this, at least when I teach social justice for people in medicine, I talk about it from this perspective of fairness. It's all about fairness. The most famous social justice thinker of the past 100 years is a guy named John Rawls, and John Rawls argues that you need two things for a just setup: you need equal rights and equal opportunity. So for Rawls, the biggest question is, how do we make sure that the rich Boston kid with Ivy League parents has the same shot at a decent life, with equal rights, as the kid who's born in the inner city of Detroit, with maybe not many role models around them, not much money? How do we make sure both have a good shot? In medicine, with something like EMTALA, which we talked about, at least you have a shot: if you get sick, you have a place you can go. Now, I don't really know, in the realm of data analytics, what the appropriate comparison is for that sort of thing. But how do we make sure, for instance, if your algorithm is saying, well, I only want to focus in on this zip code because they have the money, when that particular product might be useful for somebody outside that realm? Is it fair for you to geo-fence to that particular area if it's a product that could be helpful elsewhere? It makes total profit-based sense to do something like that. But is it fair? Those are questions that are interesting, that I don't have answers to, but those are the kinds of conversations you need to have.
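
There's no standard metric implied here, but as one hypothetical way to operationalize Casey's geo-fencing question, a team could audit how its targeting rule distributes reach across income brackets before shipping it. All of the zip codes and numbers below are invented for illustration:

```python
# Hypothetical audit: does our geo-targeting exclude lower-income areas
# from a product that could help them?
zip_stats = {
    "54301": {"median_income": 48_000, "targeted": False},
    "54311": {"median_income": 95_000, "targeted": True},
    "48201": {"median_income": 32_000, "targeted": False},
    "02138": {"median_income": 110_000, "targeted": True},
}

def coverage_by_income(stats: dict, threshold: int = 60_000) -> dict:
    """Fraction of zip codes reached, split by an income threshold."""
    groups = {"below_threshold": [], "above_threshold": []}
    for row in stats.values():
        key = "below_threshold" if row["median_income"] < threshold else "above_threshold"
        groups[key].append(row["targeted"])
    return {name: sum(flags) / len(flags) for name, flags in groups.items() if flags}

print(coverage_by_income(zip_stats))
# {'below_threshold': 0.0, 'above_threshold': 1.0} -- a gap this stark is
# exactly the kind of finding a data ethics board would want to see and debate.
```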

Tony Olson 40:44
Well, that goes back to the beneficence, to doing good, right? If you can help others with your solution or your product, do you have an ethical responsibility to make sure that message reaches everybody equally through data science, rather than constricting the message?

Casey Rentmeester 41:07
And especially if you think about a company's mission statement; that will give you a sense of what, supposedly, they're all about. And you can be thoughtful as to, well, how am I living my mission? Usually there's some sort of talk about equality and those sorts of things. Is the AI you're using, maybe one you've outsourced, whatever it is, really supporting equality through your marketing measures, or whatever that looks like? Those are questions that probably need to be asked.

Tony Olson 41:41
Right, really interesting. So one of the last questions I have here goes back to the concept of medical ethics boards.

Do you feel that there is a place for that same thing to be applied inside of private or public industry, for-profit industry, making it a data ethics board? You know, tell me your thoughts on that.

Casey Rentmeester 42:14
So whenever you talk about things like business ethics, usually you make a distinction between shareholders and stakeholders. Shareholders, we know what that means: they've got stock in the company. But stakeholders are anybody who has an interest in how that company is run, which would include consumers. For instance, on ethics boards, a lot of the times you'll have a patient advisor in there, right? So in data analytics, if you think this through, who is affected most by those corporate policies and practices? That needs to be the initial question: who are the stakeholders? Those are the people that need to be at the table in regard to navigating the gray, when things get a little ethically tricky. Now, you know better than me who would make sense for that sort of thing. But certainly that's where you'd start: who are the stakeholders? And then let's get together and make sure there's not just one voice in that room. The more sectors of society, the better, and that's why you have things like ethics boards.

Tony Olson 43:20
That is something I would have never thought about: hey, actually having a customer sit on a data ethics board.

Casey Rentmeester 43:29
Otherwise, how are you going to get that perspective? Maybe you can send out some surveys, but it's better to actually have a person in front of you.

Tony Olson 43:35
Yeah, so much to learn, I feel like. Even that simple suggestion right there from medical ethics, because it's been around for so long: taking the opinion of your patient on the ethics board. Well, maybe your customers should be on the ethics board for your data analytics solutions, or for how you use data inside of your organization, or your product, or what have you.

Casey Rentmeester 43:55
Yeah, and it might be good for business too, actually, because you might get a better sense, beyond what the algorithm is telling you, of what the person is telling you. You might get a better sense as to what they want.

Tony Olson 44:05
You know, you hear about customers being a part of product innovation and product service, right? Companies give good customers the first crack at the product, feedback, those types of things. But I've personally never heard of, though it probably does exist, having a customer on a data ethics board, because the ethics portion of that is very different than your product creation.

Casey Rentmeester 44:37
Absolutely right. And what's interesting is, ethics is intuitive. It's not like you need a PhD in philosophy to understand these terms, and anybody is interested in it; that's just how it works. You don't need a certain expertise to know what's right.

Tony Olson 44:53
Right. Well, the PhD definitely helped today, for sure, so I appreciate it. You know, this is all such a great topic, and we covered a lot of things. Before we wrap, is there anything that you want to touch on or talk about regarding ethics inside of the data science, AI, and analytics world before we break?

Casey Rentmeester 45:16
Well, I just think through some of the conversations I've had in philosophy settings, with people who are, as I said earlier, deeply concerned about this sort of thing. For instance, the top Heidegger scholar in the country only writes on a typewriter, an old-school typewriter, because he's concerned; some of these thoughts are pretty dangerous, right? He's concerned that it's going to lead to some negative sorts of things. A lot of my friends don't have smartphones. So these are things that I think need to be taken more seriously than the average consumer takes them. But the biggest starting point is just to get this conversation going in the first place, and to try to think through what we actually want out of data, and how we can use data in a way that's not manipulative. That's the biggest question right now. We're just not quite there in this country, but those conversations need to start happening.

Tony Olson 46:19
Yeah, they absolutely do. And I think they are, you know; I think everybody's kind of on the edge of their seat for it to happen sooner and faster.

Casey Rentmeester 46:27
I agree. Yeah.

Tony Olson 46:28
Well, all right, Casey, I almost forgot to ask you the most important question, something we ask all our guests here: if data science and AI were a superhero, what would it be, and why?

Casey Rentmeester 46:46
That's a good question. I think the answer has got to be Batman, because Batman is kind of this dark figure who is vigilant in the night, ensuring that things go the way they're supposed to go, but he's kind of in the background. So that's the figure that I think of: when you're building these algorithms and whatnot, even though you're maybe in the background, you're not in people's sight, what you do matters, and you want to do things for the good.

Tony Olson 47:18
That's great; appreciate that. I think you're the first person that said Batman, so I really like that. There we go, we're going Batman. That's good stuff. Dr. Casey Rentmeester, thank you very much for talking with us today. Appreciate your time, and hopefully we'll have you back here again in the future.

Casey Rentmeester 47:35
Sounds great.

Tony Olson 47:36
Hopefully, as a part two.

Casey Rentmeester 47:38
Sounds good. Take care.