When people are in a life-or-death situation, they often make a panicked decision, usually motivated by self-preservation. But if self-driving cars take off, and we remove the human driver from the equation, how will those decisions be calculated on the road?

In this clip from Industry Focus: Consumer Goods, Sean O'Reilly and Vincent Shen look at some of the ethical issues around how we will have to program self-driving cars to react in potentially fatal situations, and how public opinion could affect the way this issue plays out.

A transcript follows the video.

This podcast was recorded on Feb. 23, 2016.

Sean O'Reilly: So, Sam M. wrote in, after listening to our crossover show between tech and CG, and he wanted to know ... let's see here ... "If there's a situation in which collision is imminent, who does the software computer try to save? The passengers or the pedestrians? Does the car try to save the maximum number of lives? The human driver would naturally have his own self-interest as a priority. But would a computer driving one passenger purposely endanger its one passenger in order to save four pedestrians? I wonder how the tech companies would address this. What do you think?" And that's just one example. I mean, we watched tons of videos and TED talks, and this is ... it's kind of crazy to think about.

Vincent Shen: I have to say, for this segment, and having touched on this in the crossover, that for Sam and all our other listeners, we're probably going to have more questions for you than any answers at all. There are a lot of bright minds -- on the technological side and the regulatory side -- who are thinking about this and about the impact it's likely to have in the next five to 10 years, as this technology becomes more mainstream and the rubber starts meeting the road.

We talked, last time, about how the National Highway Traffic Safety Administration released a letter in response to some petitioning from Google (GOOGL 1.42%) (GOOG 1.43%) basically acknowledging that, "Hey, if the car doesn't have any of the typical driver instruments, like the steering wheel, brake pedal, etc., the software itself could be considered the driver."

So, that leads to all these other questions, like you mentioned. How does the software determine what the right action is? Whereas, if it's a person who's driving the car, naturally, they're going to react, and make that split-second decision. And it's harder to fault them for that. It truly is just a panicked decision. But that software, assuming it could process millions or billions of inputs ...

O'Reilly: Then it's a reasoned decision.

Shen: In that split second, who's writing this? Who are the programmers who are writing this? What are they told? How are they told to program the software? Who decides that? So, a lot of these questions come into play that I wanted to touch on in the show today in our discussion.

A lot of people talk about, do we prioritize minimizing harm, or do we prioritize the passenger? And the thing is, even behind each of those questions, there are so many nuances. It's really interesting to think about, and also very difficult to try to wrap your mind around.

Ultimately, I think public opinion will play a big role in this. And it's not even that simple. So, MIT Technology Review -- first thing I wanted to bring up -- they mentioned an experimental survey that was conducted in France with several hundred participants. And the results leaned toward people preferring that the software minimize harm -- minimize the death toll from accidents -- because ultimately, that's what people say is so great about this technology's potential, that it can reduce traffic fatalities.

The thing is, that makes a lot of sense and I agree with it. But then, the question is, that's also followed up with, someone sitting in a traditional car might feel really comfortable if they're surrounded by autonomous cars that minimize harm. But what happens if you're sitting in one of these autonomous cars, and you know, for a fact, that if the situation arose, the software would sacrifice you in order to save ...

O'Reilly: I might not get in the car. (laughs) 

Shen: Exactly. So, we have this challenge now of not only figuring out this moral dilemma, but you also have the challenge of making it, I guess, appetizing enough that the consumer market will actually adopt the technology. Because ultimately, people have to buy these cars to put them on the road. And even minimizing harm isn't clear. There are a lot of scenarios posed in the research we did. One example: saving six people versus three people is still not an easy question, but it seems like it would go toward, again, minimizing loss of life. But what happens if those six people are all octogenarians ...

O'Reilly: Oh gosh.

Shen: ... and the three people are a mother and two very young children? How does the software make that kind of distinction?

O'Reilly: Right.

Shen: And it's almost unfair, in a way, to ask anybody to program this software to think in that way. So, it's just a very challenging question indeed. And, like I said, we can't give you the answers, but we can touch on the fact that, I think, going forward, cybersecurity, privacy, safety, tons of testing, trial and error, are going to be things we hear about constantly with this technology.

O'Reilly: It definitely seems like public opinion is going to play a big role in what happens.

Shen: I agree. The survey kind of shows, everybody's going to say, "Yeah, it makes perfect sense, you want to try and minimize harm or loss of life." But when it comes down to it individually, when they have to make that purchase ...

O'Reilly: ... what if somebody's 90 and has cancer, versus a newborn baby?

Shen: It's not that black and white, obviously.