Yes! Someone gets it! When I tell people I legitimately would pull a lever to save the five and kill the one or whatever, people look at me like I'm crazy, or like I'm a psychopath.
I just want to say I think this is a beautifully realized piece of communication. The pace of information flow is accessible without being patronizing, the text is very well structured and delivered, and the graphics and music support it perfectly. Good job, people!
That is if you only see 3 options. I see more. If the other cars on the road are also self-driving, they can all instantly start to make way for you to minimize damage, and there is a good chance everyone can walk away. Otherwise, you can still swerve towards the car, but not smack it. Just get mostly out of the way of the falling thing. And blame goes to whoever tied down that load.
I'm glad I'm not the only one who thought of this option. If our cars communicate with each other, then the motorcycle goes onto the shoulder to make room for the car. It actually improves everyone's odds of never getting in a wreck.
+Kaleb Bruwer The point is that at some point, somehow, somewhere, an unavoidable collision situation will present itself and the car will have to make a choice. We can argue for and implement an infinite number of safety measures to keep this from happening, but at some point we need to assume every safety measure failed and a choice of collision needs to be made.
Now here's another ethical dilemma: When we get to the point that most cars are self-driving and are 10 times less likely to have an accident, should humans even be allowed to drive on public roads at all?
purpandorange I get that it's a joke, but I googled a bit and found some studies saying female drivers are more dangerous and others that say male drivers are more dangerous. Anyway, it seems like female drivers make more mistakes, but male drivers take risks and behave more aggressively in traffic. Can't say I'm too surprised about this.
Yeah. Exactly. Once we acquire the technology, it would have immense benefits for traffic as a whole. So in my opinion, yes, people should not be allowed to drive.
This is very interesting to think about. It also rests on an assumption: in this scenario, if all cars were self-driving, could they sense everything around them or not? If so, the other cars may be able to 'see' this event happening and account for it, moving out of the way to make a gap for the oncoming car that would otherwise hit the object. However, that seems to assume a perfect overseeing system, probably too difficult to create at this stage. Awesome to think about tho
What would work is if cars could connect with nearby cars. A car senses danger. It tells the other cars what it's about to do. Each of those cars says what it will do to avoid it. The car behind stops, etc. Complex, though
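Sketched in Python, that handshake might look something like this; the message format, maneuver strings, and method names here are all invented for illustration, not taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    sender_id: str    # hypothetical vehicle identifier
    action: str       # e.g. "swerve_left", "brake_hard"
    target_lane: int  # lane the sender is about to occupy

class Vehicle:
    def __init__(self, vid: str, lane: int):
        self.id = vid
        self.lane = lane

    def on_danger(self, neighbors):
        # 1. Sense danger and pick an evasive action.
        intent = Intent(self.id, "swerve_left", self.lane - 1)
        # 2. Tell nearby cars what we are about to do.
        for n in neighbors:
            n.on_intent(intent)

    def on_intent(self, intent: Intent):
        # 3. Each neighbor announces its own compensating move;
        #    here, a car in the target lane simply brakes to open a gap.
        if intent.target_lane == self.lane:
            print(f"{self.id}: braking to make room for {intent.sender_id}")

me = Vehicle("car_A", lane=2)
me.on_danger([Vehicle("car_B", lane=1), Vehicle("car_C", lane=2)])
```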
If all of the participating vehicles are self-driving, that would in theory work, but it doesn't solve the underlying problem. I mean, of course that would be something we would want to implement, since it minimizes danger to humans, but the scenario is a hypothetical. Even if you can avoid this specific scenario, you can always find scenarios where you're back to making this kind of difficult decision, which is why it's a problem that needs to be solved.
Laser sensors would be faster than radio or video sensors. They'd just need to be really sensitive without being a nuisance to the eyes and other senses of people and other living creatures, inside and outside the car.
Your analysis sounds fine, yet it does not capture the essence of the dilemma. If the neighboring car makes a sudden turn, it may affect other cars; not necessarily, but likely. There will be a cost then, and a chain effect from the incident, and that cost is hard to estimate. The essence of the story is that any emergency may pose a decision-making dilemma (maybe not fatal, but to some degree) for the AI, which would find it hard to reconcile efficiency (what it is designed for) with moral requirements.
I've gone over this very topic several times in the last few years and have come to the same conclusion each time. The only ethical way of handling accidents with self-driving cars is for each of them to prioritize the lives of people *outside* of the car. That means they will always choose to put passengers in danger before putting outsiders in danger. This way, it's the passenger's conscious decision to trust their life to a car the moment they enter it.
+Lutranereis Very well put. Using a self driving car would imply accepting responsibility for the decisions the car makes on the road. This is why you'll still always need to be alert, and able to override the controls in the event that you don't agree with the decisions the car is making.
+Lutranereis Well, if company A offers the car you suggested and company B offers a car that saves the occupant then I'm going to buy from company B. Of course you could have the government force company B to act like company A, but government regulation of the economy is always immoral in the first place.
+Xezarious42 This could be regulated by holding the driver responsible for the decisions the self driving car makes. Company B's cars would inherently be more prone to cause harm to others, even if they're slightly safer for the occupants. Occupants of company B's cars would thus suffer more liability on account of their car's decisions in the long run. The argument only works in a scenario where you could be severely hurt and would rather sacrifice someone else to minimize your harm. But you could never predict that if you buy company A's car it would save your life and company B's car will get you killed. Both companies would have good safety track records if self driving cars become common. Ultimately people will choose the car that is least likely to hurt someone or cause damage regardless of if it's self driving or not, because they're responsible for that damage.
The self driving car should theoretically be seeing and predicting any eventuality. It uses sensors to see what the human eye cannot. So theoretically, it could see the boxes becoming more unstable within the truck and therefore take corrective measures (like slowing down and giving more distance to the vehicle in front) before the boxes unload. Swerving should not actually happen with a correctly programmed self-driving vehicle, because it can see and predict the danger before it actually materialises.
They mention later in the video that reality wouldn't play out like that, but that is not the point. The point is, should the AI choose? In reality, don't tailgate. Don't crash into anyone. Obey traffic rules. There's no need for ethics to be involved.
It would be financial, but it wouldn't be about liability. People aren't going to buy a car that prioritizes causing minimal damage over the life of the driver. And they can't really be blamed for that since some people will have their kids in the car. So that will more than likely be what would happen.
I think the biggest hurdle will be getting through the phase when both kinds of cars exist. When all cars are self-driving, and especially if they can coordinate with some central controller, accidents will probably be so rare that it really doesn't matter which way you go. However, in that phase I expect huge incentives to switch to self-driving cars; for one thing, your insurance premiums will likely be much, much lower, enough to justify the cost of upgrading your car.
+purpandorange It depends on how public the information on this niche case is. If it becomes a selling point for cars then what you said will be true, but if it's hidden away in the code and nobody but the producers are even really aware of it, then FortNikitaBullion's idea is totally plausible.
With a set standard of having to prioritise minimal casualties and then minimal damage, whether set by an industry group or by government regulation, I think that could be avoided almost entirely.
You can put a constraint on the self-driving car that it is not allowed to follow so close to a vehicle that it cannot stop in time should something fall off the back of it. That would avoid the posited decision entirely.
+Crispy Druid It is only a thought experiment that pinpoints ethical problems. Even if you eliminate this particular situation, the problem is still there. Random situations will occur and accidents will happen. For example, instead of a box falling from a truck, something could fall from a bridge above; in a forest a tree can fall; in the mountains a big rock can just drop (some roads in Montenegro are famous for that). You cannot predict everything, and you cannot avoid all the risks.
+Crispy Druid wow so shallow thinking.. that was not the point of the video at all. there will always be a situation you can't predict. you'll learn that eventually.. when you grow up
If the car is intelligent enough to identify different scenarios at the point of the incident, wouldn't it be intelligent enough not to drive too close behind a flatbed truck with an insecure load?
If and when all cars (and motorbikes) are self driving, they could be connected to a large network. The car spots danger and notifies the motorbike to the right to move out of the way. Connecting all self driving cars could solve a lot of problems
+Westloki It may not be self driving, but it will be handing telemetry to the other vehicles and also warning the rider of hazards. You don't need all vehicles to be self driving, once most of them are they'll be able to keep the human driven ones safer too.
Because motorcycles are fun? Because for some people, an accepted amount of danger is thrilling? The same reason people climb mountains without safety gear today. Expecting everyone to take the safest option, or assuming that for everyone the safest option is inherently the best, or worse, the only option, is foolishness. YOU FOOL! YOU FOOLISH FOOL! YOU FOOLHARDY FOOLISH FOOL!
It's a bad system that would allow itself to be boxed in in the first place. If there are no contingencies, create contingencies. This is especially true once all cars are automated and are all just part of a larger system. All should be able to compensate for any other under any circumstance.
+Abraxis86 Agreed. The only unavoidable accident I can imagine is one not caused by other vehicles. Like, say, a piano falling from a tall building in front of the car.
I mean, I think you guys are kinda helping my point. Neither of those situations are caused by other vehicles. But honestly, I think there is technology that can be used to mitigate risk from those types of situation. Thermal cameras, for example, would probably go a long way to preventing those. Traveling slowly in high pedestrian areas, which is already in effect, but with far better attention and response from an AI car versus a human. Redesigning roadways is another possibility, like creating animal crossings under major roadways, already being done in a lot of places. There are solutions for almost any situation that you can think of.
jed92y It doesn't matter; programmers will still be faced with this dilemma, because not every road in the world is going to be immune to wildlife, or even have the funds to completely renovate its infrastructure...
This COULD be countered by another advantage of self-driving cars: instant communication. Let's look at this scenario again. A heavy load of objects falls in front of your car. It swerves right but avoids any impact, as every other vehicle (being self-driving) immediately moves in sync with your car, organically dissipating the distortion in traffic caused by you not dying. We don't make a dangerous vehicle drive away from the rest; we make self-driving cars and regular cars drive separately.
Ryan Low Signals take time to send, receive, process, and respond to. These cars will not be talking to each other directly, but through servers over an LTE connection (or other long-range wireless), so just relaying the signal will take a fair few milliseconds to seconds (probably a few too many). If cars were to beam directly to one another, not only would that pose major security risks (imagine someone faking such a signal and launching your car off the highway to avoid a car which isn't even there), but handling new connections all the time would also be a nightmare on the software side of things, not to mention that under poor conditions the signal may not even get through (although this depends on the tech they use). Overall, car makers are advertising communication as a system where cars help other cars in the network adjust to road conditions etc. There would also be a whole new ethical problem where the speedy response to the relayed signal causes the responder to have an accident (say, slipping mid rapid lane change), since physics is the limit to what software can account for (i.e. a sudden crosswind in slippery conditions whilst rapidly changing lanes is a recipe for disaster). There's also the situation where the other cars may not be self-driving, which is the most likely real-world scenario. Regardless, this video is not focusing on avoiding the situation, but on how a car should react when something like this inevitably happens.
@@zoravar.k7904 So, would it work to, for example, make each car send its position in real time to the server, and then, when the accident car sends the signal, have the server calculate the order each car in the network must follow? Also, if the car must choose, it will probably choose the smallest financial loss option; that is, the one that makes you lose the least money.
@@norielsylvire4097 It could work, but that would require every company to use a centralised system. I think a solution where each car avoids the accident using just its own data is the realistic implementation here. Since then each subsequent car would avoid the accident using its own sensor data. And if there are some industry standard response strategies, then a mass response to an accident should be predictable despite the fact that each car is working independently. A beacon system similar to TCAS could also be used where cars tell each other which direction they are headed. But I don't think coordinating strategies would work any better than all cars using a predictable response. Not to mention in such a split second response things like packet/connection loss could mean chaos if we rely on a server. This would also consider non-self driving cars since it is a dynamic system.
@@zoravar.k7904 The reason I had for using a server that way is that if every car uses its own sensors and calculations to determine a movement, each car would take some time to receive the information, process it and respond to it. In that scenario, however, assuming a separation of 1.5 m between each car, the server could send an order to 10-15 cars in a line on the other lane to decrease the distance between them by 10%, leaving enough space for the car in danger to slip in and avoid every collision. The server could tell every car behind the obstacle to slow down to avoid a collision. That would be quite effective. I did think, though, of connection loss. A possible solution might be to build signal repeaters every now and then, or even to use Li-Fi under the road to communicate with the cars, but I admit I don't know if it would be financially profitable. As for all companies having to share a server, that isn't really a problem. Think, for example, of how every company uses WinRAR and doesn't program its own RAR compression/decompression algorithm. In this case, the server would be run by a company dedicated to it, and every car would just use it, the same way every company uses Windows computers or light bulbs made by someone else instead of making its own. In this kind of scenario, the cars would be built to send sensor information to the server, and the server would do most of the calculations. Separating the work, letting some companies do one part of the job and others do other parts, is a good way of improving the results. So, basically, a self-driving car would be the sum of a car with sensors built by car companies and the calculations from this one server.
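A toy version of that centralized scheme, purely to make the idea concrete; the order strings, the API, and the 10% figure (taken from the comment above) are all assumptions:

```python
# Toy coordinator: on an emergency, the server tells cars in the adjacent
# lane to tighten their spacing so a gap opens for the endangered car, and
# tells cars behind the obstacle to slow down.

def plan_response(adjacent_lane_cars, same_lane_cars, emergency_car):
    orders = {emergency_car: "merge_into_gap"}
    for car in adjacent_lane_cars:
        orders[car] = "reduce_headway_10_percent"   # open a gap next door
    for car in same_lane_cars:
        orders[car] = "brake_gently"                # avoid piling into the obstacle
    return orders

print(plan_response(["C1", "C2", "C3"], ["C4", "C5"], "ME"))
```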
I predict that future cars will continually be made structurally stronger and safer for occupants during a crash instead of having programs to tell it to swerve.
+boy638 I understand your point. The question is, to what extent 'stronger and safer'? What if the truck in front were carrying blocks of concrete? What about logs of wood? Or if it spilled industrial chemicals onto your car? Would 'facing the obstacle' still be a choice, or would swerving and avoiding it be much safer?
In the video, the man said that if we were driving that boxed-in car in manual mode, whichever way we reacted would be understood as just a reaction, not a deliberate decision. Self-driving cars are predicted to reduce traffic accidents and fatalities by removing human error from the equation, and there can be other benefits too, like decreased harmful emissions and minimized unproductive, stressful driving time. This video is talking about the ethical dilemma of self-driving cars. He also asked whether a random decision might still be better than a predetermined one designed to minimize harm. I think it is telling us about making our own decisions, about the correct reaction and response, and helping us learn about technology ethics; and although reality sometimes may not play out like our thought experiments, that is not the point, because they're designed to isolate and stress-test our intuitions on ethics.
Everything you said is true, but ending the video there does more harm than good. Firstly, these issues aren't specific to self-driving cars; a human would also struggle. Secondly, more emphasis should be placed on the fact that self-driving cars objectively kill fewer people
I love this subject - because not only is it going to obviously save lives - it's also advancing a broader understanding, conversation, and debate about ethics which benefits us all. People are still going to die, but damn, far fewer than today.
That's not what is going on here. This is a worst of the worst case scenarios. Everything fails, and the car can only make one last decision. This is a serious question programmers and self driving car designers have to worry about. Not everything can be accounted for, and not everything will work perfectly. Things fail, and the goal is to minimize such failures. However, there is always a possibility of absolute failure. Just think about the Apollo missions for a second. They tried to account for every possible scenario, and yet, things still failed and unknown potential errors came into play.
That’s not how hypothetical situations work, there will still be mechanical malfunctions and “acts of god” that will make these kinds of scenarios unavoidable.
One thing to consider is that self-driving cars almost entirely eliminate human reaction time. This means the car can make decisions significantly faster than humans and dramatically reduce the chance of a collision taking place. Granted, there will be some scenarios where a collision may be unavoidable, but in practice, given the collision rates of self-driving cars vs human-driven cars, and especially considering the evolution of self-driving cars working together on the same road, sharing information with each other to further reduce risk, I don't think this will be a problem.
It will be a much rarer problem, but that can't guarantee the elimination of similar situations in the future, which is why it's a problem we need to solve sooner or later, especially since it's applicable not only to self-driving cars but basically to any situation where you need to decide between multiple human lives.
Program the car to take the action that is least likely to result in injury/death. If anyone of the 3 choices given in the video will result in a fatality then it's inevitable and thus the choice doesn't matter so long as the path least likely to result in injury/fatality was taken as the result would be identical if not worse in a manually controlled vehicle. The argument of premeditation holds little water as the individual hit by the vehicle is injured/killed regardless of whether or not the decision was pre-programmed, or instinctual reaction.
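As a sketch, that rule is just picking the action with the lowest expected harm; the probability numbers below are invented placeholders for what would really come from the car's perception stack:

```python
# Pick the action with the lowest expected harm. These values are
# illustrative only; a real system would estimate them from sensors.
options = {
    "brake_straight": {"p_injury": 0.30, "people_at_risk": 1},
    "swerve_left":    {"p_injury": 0.45, "people_at_risk": 1},
    "swerve_right":   {"p_injury": 0.25, "people_at_risk": 2},
}

def expected_harm(opt):
    return opt["p_injury"] * opt["people_at_risk"]

choice = min(options, key=lambda name: expected_harm(options[name]))
print(choice)  # -> "brake_straight" with these made-up numbers
```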
+DaCamponTwee But the issue of responsibility remains: if it is pre-programmed, the programmer has to determine someone's probable fate in such a scenario, which they would not have to otherwise. Can they do that?
Mohammad Tausif Rafi I have no moral problem with it. It's sort of like the 'murdering 1 person to save a dozen people' scenario with the train that TED did. It is most logical to take the course of action least likely to result in the most injury.
+DaCamponTwee I think they implied that hitting the helmeted motorcyclist has a lower chance of killing them. So by the 'least injury/death' logic the car should always hit the driver with the helmet on in this situation. The problem with this is that it's basically punishing the motorcyclist for being prepared and rewarding the other one for being unnecessarily vulnerable
+Jim No, the car should just brake in a straight line. Any incident that means the car can't come to a 'safe' crash speed before impact means the car has a huge risk of losing control if it swerves. The car should almost always brake in a straight line. Besides we're talking about extremely marginal cases here. The very example exposes that we aren't thinking of AIs properly. The car will not let itself get boxed in. It will not follow a 'risk' vehicle (something that may have cargo fall off it) at a distance that means it can't come to a complete stop before hitting anything that might fall off the back.
@@thalespro9995 It'd be unlikely, like very unlikely, in the example given, because, well, all cars are automated in the riddle. It's only things like motorbikes (and that's a stretch) that aren't.
If it was following proper traffic laws regarding stopping distance, yes. But in the scenario it says you can't stop in time; therefore, it is tailgating.
1:21 Exactly! This is NOT something we should be highly concerned about for now. We should start by turning our focus to developing the technology and educating the general public that YES, self-driving cars ARE SAFER, as they have a much, MUCH lower rate of accidents per distance travelled than human drivers do. Cases like the one depicted in this video will occur rarely, and definitely much less often than on our current roads, dominated by human drivers. In my opinion, videos like this one are just damaging the general public's perception of self-driving cars and pushing their development even further away...
I disagree. We should not accept any advance in technology like this without question. Self driving cars, when there are accidents (and there already have been many), present many ethical dilemmas, and we need to be sure the people in charge of solving them will do right by us. We need to think more about what could happen, not just blindly accept progress. I would love to see the day where most cars are self driving, roads are safe, and travel is enjoyable. But not at just any ethical cost.
@@josephcowan6779 You're then putting the idea of justice ahead of the lives of many, 'cause we know for a fact that fewer accidents are caused. The tricky part is just who we blame. Is it truly that important?
@@MPRF12345 It could be. The tech has only just rolled out. We can expect to see many mishaps with it in the near future, even realizing that in a decade it will probably be much safer for everyone. But in the meantime, yes we need to ask whose fault is it, and is it just?
This video made me question my utilitarian philosophy big-time - going well beyond the efficacy of self-driving cars. I realize now I was just generalizing by saying I believe in utilitarianism - saying stuff like "to minimize harm" or "to maximize happiness". But, as is usually the case when it comes to one's personal philosophies/ beliefs (especially if said "one" is twenty two), it turns out life is tricky and many a scenario could be thought up that leads to conflicting answers using the same basic rules.
I'm utilitarian too, but with one exception: I don't expect anyone to sacrifice their own life. Some people are willing to do it, and that's fine, but expecting everyone to do it is asking too much. This solves a lot of ethical dilemmas and trolley-problem variants where the person's own life is involved. It's not that they believe their life is worth more than the lives of the five people tied to the other track; it's just that self-sacrifice is too much for most people, and that's perfectly understandable.
2:20 Neither; both are pretty much guaranteed to die. It's whichever one has a leather jacket AND a helmet, because a leather jacket will actually reduce physical harm.
In the example shown there was ample space to slow down and (partially) change lanes, getting very close to the motorbike or the other vehicle. The boxes fell to one side and could have been avoided easily. Self-driving cars know *exactly* how big they are and how close they can get, and they are designed to avoid the situation to begin with.
In this particular example, I think it should be obvious the car should apply full brakes and endure the impact in order to minimize harm to others surrounding the primary accident. I also foresee a future where car accidents can be 100% eliminated; sensors should have notified the truck that its load was unstable.
+AirTimeEh Well, in that case you would have to have sensors in the strap connected to the vehicle, and how would it know if it is unstable or not because to the sensors it would just think that they were just reducing the load instead of it being unstable.
+vids and clips Or vibration sensors in the bed of the truck, or cameras on the highway monitoring cars, or not manufacturing trucks capable of such a flaw, or having trucking regulations that prevent it.
+AirTimeEh And/or 'bots load the trucks. Right now it's people with forklifts and tie-down straps. In the future it could be 'bots and an advanced cargo containment system that works well with them.
***** There will always be the possibility of manually driven cars on the road, and a system like the one you're proposing would be crazy difficult and time consuming to implement throughout multiple car companies' products.
The thing also is, there's option 4: hit the brakes. If there is enough space between you and the car behind you, the only thing that happens is you hit the objects at a slower pace, minimizing damage to you and your surroundings. Otherwise, there is still momentum in the falling objects, so if you hit the brakes (and hold them, to signal the cars behind that didn't see you stop), you would still crash into them, but with less damage than otherwise.
This is one of those cases where the ejector seat that I saw in a movie would be useful. The seat would propel you upwards, which would allow you to leave the car if it crashed.
+Jacob Duus Gundesen Except if you go to war, you know what you are getting into, and you probably trust and care about your fellow soldiers. Not everyone wants to drive a car that will sacrifice you to save some person you've never met and don't care about. Say you were at a grocery store with 9 other people when suddenly someone with a gun enters and says, "One of you has to die so that the rest can live." What would you do? Would you really be so quick to give up your life for total strangers? Do you value yourself and your life so little? It's easy to make these decisions on the internet from the safety of your own home when talking about a fictional scenario that probably won't ever happen, but if the time ever came, I don't think it would be so easy to make that choice for real.
Ok, but here's the thing: if someone with a gun enters a grocery store and threatens 9 people, shit is not happening to you. Shit is happening to 9 people all together.
+Saulo Goki But then again, you can make this argument about any road accident that involves multiple vehicles and drivers. I don't think it's about you or them; it's about an unfortunate situation.
See, if a bridge with 9 cars falls, it is one big shit happening to multiple people... If a giant block falls over my car and I throw it onto the side lane by reflex, I'm now part of the shit happening to other people. ...but then, I know this is much more complex than 0 and 1.
Simple concept, since I can tell you we are a little ways away from putting a system on a car that can determine head gear. My thought brings it back to a more simplistic approach based on instinctual nature, just now programmed. I'm going to give my thoughts in a stepped-up version, starting with the basic and simple and moving up to higher complexities. My view is that of someone going into electrical engineering, as well as a fireman in a department that services two of the most heavily used Midwest interstates. I have also added at the end of each order paragraph a summary of what devices and/or systems would be needed to include that order in a vehicle.

First order: all else fails, veer right, slow down. In countries where you drive on the other side of the road, veer left. This is simple, as it would push you or the others into the shoulder, away from oncoming traffic. It minimizes the chances that someone will be pushed into an oncoming lane and, assuming everyone is paying attention, allows the person on that side to divert to the shoulder. This requires no input device other than danger detection.

Second order: veer toward a noted/mapped shoulder, including grass areas without a steep drop. No shoulder: stop. This will be used in places where there may be a closer shoulder on the opposite side, or in the case of construction where the shoulder was moved, or on a bridge with no shoulder. It can be maintained by adding a single bit of information to all routed roads accessible to the car, updated based on reads by it or other cars. The basic system requires only a stored map of safe shoulders.

Third order: if all local lanes are moving in the same direction and there is a large gap, veer towards it, including the inside shoulder of a highway. This is the first step up in complexity, relying on simple wide-angle cameras or any form of distance and object detection already in use. In this case, the motorcyclist/scooterist could potentially take the penalty; however, they also have the smallest footprint in a lane and the highest capability of accelerating out of the way. This method will only be used if their presence is not detected, whether because they are some distance from the sensor or because of any other occurrence that throws off the detection. This requires only local-awareness technology already in use.

Fourth order: large gaps detected nearby by others, veer in that direction. This begins a much more complex area whereby the cars start communicating with each other. This networking is already being put in place by modern self-driving or assisted-driving cars. This order relies on the other vehicles informing your car of potential avoidances and where it can veer, with the same idea as moving toward the right shoulder, only this time it's a moving shoulder/gap. This requires sensor information to be broadcastable locally, possibly over a designated wifi channel.

Fifth order: communicate with other smart cars to make a necessary gap that allows yours to divert. This is merely an increase over the previous order, adding the complexity of all the local cars determining the optimal direction of movement. This requires the computers to link and process all information quickly and accurately, then send out instructions.

Sixth order: no detectable oncoming traffic, move into the oncoming lane, then back immediately. This is put farther down the complexity tree, as it would require a good bit more in communicating systems before relying on a car to determine an oncoming lane to be the correct choice right away, even if it is observably empty. This requires sensors either on the road, or for every vehicle to be required to broadcast its position reliably to a system immediately accessible to the vehicle.

Through these six orders you might note that I haven't actually addressed the given matter directly. This is because that would rely on deliberately deciding to hurt someone. I indirectly stated my decision in the second order, though, by saying that if the car can determine there is no shoulder, it is simply to stop. This decision comes from my experience with recorded crashes and from attending accident scenes as a fireman. If there is a shoulder and people can move to it, damage and harm will be minimized by diverting in that direction. However, if there is a barricade on both sides, as is often the case on a bridge, then diverting will cause more damage and all involved passengers will end up hurt. If it is guaranteed that everyone will be hurt extensively, then minimizing the number hurt becomes the priority, to improve the ability to save those hurt.
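Read as code, the six orders form a prioritized fallback chain. A compressed sketch follows; the world-model queries are hypothetical stand-ins for the detection systems each order calls for, and the sixth order is omitted for brevity:

```python
class World:
    """Stub standing in for the car's sensor/comm stack; every query
    here is a placeholder, not a real API."""
    def can_make_gap_via_network(self): return False    # fifth order
    def gap_reported_by_other_cars(self): return False  # fourth order
    def large_local_gap_detected(self): return True     # third order
    def mapped_shoulder_available(self): return False   # second order
    def no_usable_shoulder(self): return False          # second order

def choose_maneuver(world):
    # Try the most informed option first, fall back order by order.
    if world.can_make_gap_via_network():
        return "follow_coordinated_gap"      # fifth order
    if world.gap_reported_by_other_cars():
        return "veer_toward_reported_gap"    # fourth order
    if world.large_local_gap_detected():
        return "veer_toward_local_gap"       # third order
    if world.mapped_shoulder_available():
        return "veer_toward_shoulder"        # second order
    if world.no_usable_shoulder():
        return "brake_straight"              # second order: no shoulder, stop
    return "veer_right_and_slow"             # first order: all else fails

print(choose_maneuver(World()))  # -> "veer_toward_local_gap" with these stubs
```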
Finally, someone who gives a logical answer. While it doesn't directly answer the question at hand, it does, in essence, solve it. In these situations, you give an answer based on possible scenarios, understanding that there is a possibility someone gets hurt. Most people in these comments just want to say 'brake' or question the validity of the question, but you actually address possible programming solutions.
How about rolling a quick die? The car has three options: if it lands on 1, it turns left; if it lands on 2, it stays straight and the passenger dies; if it lands on 3, it turns right.
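As code, that dice roll is a one-liner; shown only to make the proposal concrete:

```python
import random

# Uniform random choice among the three maneuvers, as proposed above.
action = random.choice(["turn_left", "stay_straight", "turn_right"])
print(action)
```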
The solution to the dilemma is to have the cars network with each other so that they always leave each other escape routes. This would not require all cars to be self-driving and networked either, as long as about half of them were; and before we get to that point, simply program the self-driving cars to follow at a greater distance.
That's easy. The self-driving cars can be programmed not to go beyond 25 Mph, and to keep some distance from other cars. In that case, the decision will be which driver will be slightly injured...
This is why vehicle to vehicle communication is SO important. If a car can communicate with all other vehicles in its vicinity and those vehicles react to one another, together they can find a way to minimise damage for everyone involved.
But at some point, some computer has to make a decision: whose life is prioritized in the event of unavoidable catastrophe? And at that point, should the decision lie with the autonomous machine or the human? It is a real-life trolley problem that we will soon face.
Yeah, this lowers the chance of a crash happening, but it does not take into account pedestrians, road conditions, etc. So the ethical decisions still have to be programmed, and we still have the same issue.
False dilemma there as in the first place the minimum separation distance could have been designed to give the car the necessary stopping distance to avoid any swerving.
Just go in the direction of least object resistance (scan and see where there is the least stuff/the biggest gap). Additionally, invest in front and rear signaling that a stop is about to occur, such as louder horns and emergency blinking LEDs.
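A sketch of that biggest-gap rule; the distances below are invented stand-ins for real sensor returns:

```python
# Free distance (metres) the sensors report in each direction; values invented.
free_space = {"left": 4.2, "straight": 1.1, "right": 7.8}

# Steer toward the direction with the most open room.
best_direction = max(free_space, key=free_space.get)
print(best_direction)  # -> "right" with these numbers
```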
+Viktor6665 In this scenario it said the car doesn't brake in time, and that option would cause the most damage to the self-driving car, compared with swerving left or right.
I agree with Viktor. Anyway, it is a made-up dilemma. Build self-driving trains and trams to keep the schedule. Driving a car is a life experience; it should not be automated.
The best way is to always maintain a minimum safe distance from the vehicle in front, within which the car can safely stop itself without swerving; the distance will vary according to the speed of the vehicle.
The answers are obvious: the car will do everything in its power to save its occupants, but it will never deliberately hurt someone else to do so. Furthermore, as previously mentioned, if the cars could communicate and connect with other self-driving cars (within a certain range), they could coordinate to avoid accidents; for example, the car on the left could move and give room to the central car, etc.
I disagree; I don't think the answer is obvious. The car may do everything it can to save the occupants, if that's how it's programmed. But what if the car will do everything it can to prevent a major accident, and in doing so could cost the occupants their lives? You say the car will never deliberately hurt someone to save the occupants, but you have absolutely no way of knowing that for sure. The video already demonstrates this: what if the only way for the car to protect you is to purposely slam into someone else? That is deliberate. The cars communicating with each other is a good idea, but again, what if there is something the AI cannot do, or the AI calculates that if the car moves to the left it could cause another huge accident? Is it more important to protect the occupants or to reduce the severity of an accident regardless of the risk to occupants?
I have thought about this for a long time, even before I watched this video. The conclusion I came to was that the only possible solution for manufacturers is inaction. If an ethical dilemma is detected, do nothing. If it kills the driver, so be it. If it kills 10 toddlers, let physics take its course. Otherwise, it runs into an ethical minefield that nobody wants to tread.
@@avonord A car that is programmed to take no action to save the car owner will never sell. Who would buy a self driving car which is specifically NOT designed to save your life? Surely, to the car owner, saving one's own life is the highest priority. Consumers would buy a car that is best for saving themselves.
Technically speaking, you do have time to stop. As seen to the left at 1:05, there are no cars behind you. The story also seems to take place on a highway. By stopping suddenly, your speed would drop fast, which may help you avoid the heavy objects. And chances are the car will stop suddenly anyway if the objects suddenly appear in front of it.
The best choice for the car manufacturers would be randomized decision making. Each case would have an equal probability of happening. In simple words, if the same scenario is repeated 3 times, each outcome will have a probability of 33%.
Nope, I don't think that would be the right decision. Would you bet your life (quite literally) on the outcome of a coin toss? I know I wouldn't. So we have to come to a conclusion about what the priority will be.
Two suggestions: 1) Limit the car's ability to make value-based judgements, such as modeling other vehicles as mere objects so that all options are equivalent. This would be more like the current situation, since humans can't process enough information at that speed. 2) Program two modes: a) altruistic mode, which minimizes harm even if the driver must sometimes be sacrificed, and b) selfish mode, where the car tries to protect the driver under all circumstances. The car would default to selfish mode, but the owner could switch it.
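A minimal sketch of suggestion 2, with invented names, defaulting to selfish mode as proposed:

```python
from enum import Enum

class EthicsMode(Enum):
    SELFISH = "protect occupants at all costs"
    ALTRUISTIC = "minimize total harm, even at occupants' expense"

class CarConfig:
    def __init__(self):
        self.mode = EthicsMode.SELFISH  # default, per the proposal

    def set_mode(self, mode: EthicsMode):
        # The owner can explicitly opt in to altruistic behaviour.
        self.mode = mode

config = CarConfig()
config.set_mode(EthicsMode.ALTRUISTIC)
print(config.mode)
```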
In my opinion, there should be a middle ground where the car will always try to protect the driver but also minimize harm by choosing the object which is least likely to hurt anyone
This is why some companies are doing mass surveys, then telling the car what the majority of respondents answered; that way it is, in a sense, transferring the decision to the people.
We have failed versions of that. Other than bus lanes, which most people tend to respect, truck lanes unfortunately do not work as intended, and neither do bike lanes. The number of people who still drive their cars over that tiny path is sad.
Want to kill someone? Find out when they are driving close to a cliff and hire 10 people to stand in front of the road! The car will drive itself off the cliff to save the 10 people. Genius!
@@ashleapatterson82 This is a worst-case scenario that he is talking about, where there is no time to avoid a collision. You can try to brake all you want, but braking doesn't make you stop immediately.
The answer is to swerve as minimally as possible (for a near miss) in the direction opposite to where the objects are falling; and the reason this isn't a problem is that if most people are using self-driving cars, they should be able to cross-communicate, which means whatever is in the way of the direction the car swerves in will react to avoid a collision, and if that were to cause another collision, this would be repeated until there are no more oncoming collisions. This "dilemma" vastly underestimates the capabilities of AI, including the speed at which decisions can be made (it could potentially decide to drive further away from the car with the objects on it, so that it has more time to brake and avoid accidents) and the speed at which it can send messages to nearby self-driving cars. If all cars and motorcycles had a mandatory self-driving AI that could override the manual driver to prevent any such collision, this could never be a problem. I.e., to solve this we have to reverse the current situation of autonomous driving: instead of self-driving requiring a human observer, human driving should require an AI observer, at least on public roads. In other words, I think this video is completely idiotic, a dilemma created by someone who falsely thinks he understands what he's talking about.
The video is based on a very misguided premise: that the action of a human in an emergency situation like this is instinctive/impulsive, while that of a robot car is premeditated/programmed. If programmed strategies are to be called predetermined/planned/premeditated, then the impulsive neural pathways in a human body (which have developed over years of experience/learning) that are responsible for taking the action should also be called predetermined/planned/premeditated. In emergency situations like this, a computer cannot crunch moral algorithms to decide a strategy either. It needs to respond using a very low-level algorithm based on the obstacle states around it, much like the short-circuited neural pathways that tell a human what to do in that situation. This is an interesting philosophical debate for those who don't understand how AI works, but it does not and will not have any practical relevance.
Eric O In that sense everything in the universe is predetermined. If you know the state of every elementary particles in the universe, what the state of those particles will be a minute from now is completely predetermined. "Free will" is just an illusion. If you carefully set the initial conditions to be same, at the level of atoms, and run the same scenario twice, the exact same thing will play out both the times. Of course it may be very sensitive to initial conditions, as chaotic systems are. But that's true both for humans and algorithms. Humans are not beyond the laws of physics or out of the universe (unless you believe such things). The exact same input and initial state (created by the history of inputs) will evolve to exact same state in the future.
+Eric O Yes, but not necessarily in a way that the programmer can know. Self-driving cars use machine learning where they learn from past experiences (and in the case of Tesla, from the experiences of every other Tesla car in the world). For example, it might just do what another car did in a similar situation which allowed it to get out safely.
This may seem pretentious and a little hard to grasp, but the "random reaction" that humans make isn't really random, is it? There's a lot that goes into our actions that is decided in our subconscious rather than our conscious mind. What a lot of us consider instinct and/or intuition isn't really that, is it? It's a lot of moving parts that we take into account and act on accordingly. That's why objectively correct decision-making can feel off, and why, when there's a missing piece in a situation, it can likewise feel off without us knowing better. idk, I'm rambling. Food for thought?
My guess would be that the car would swerve towards the motorcycle. This would not be out of malice, but it would be due to the fact that there is more space for the car to potentially avoid a crash if it moves in that direction. In other words, the car would move away from the more blocked off area and try to avoid any crash regardless of if one is inevitable.
I think that in the future, as more and more vehicles become "self-driving", they will be connected together, so a vehicle avoiding a deadly accident (e.g. a kid on the road) would tell the car next to it as well, and they would maneuver together so that no one gets harmed, or the damage and danger is minimal.
While true, programmers do have to consider undesirable and unlikely situations. That's really the basis of the problem. It's easy to account for the most common problems and situations. It's hard to try and account for the rare and ethically challenging problems.
That kind of direct "pay to not be killed" plan would become public and unpopular too quickly. I'm not saying that something similar might not occur, but it would at the very least be more circumspect and harder to see through.
To me (as a current C.S. student with 20 years of professional electronics and programming experience), the solution is actually quite simple: don't allow any A.I.-controlled vehicle to create a secondary accident. The only choice is for the vehicle to crash into what is ahead of it. First off, if the vehicle following the truck is following the normal rules of driving (the 2-3 second rule), then there is no reason why it cannot come to a stop before colliding with the objects falling from the truck. If there is inclement weather, the 2-3 second rule should be adjusted accordingly, and it should still be able to stop. If the owners of the vehicle did not perform the proper maintenance to allow the vehicle to stop properly, then the occupants around them should not be penalized for the owner's negligence. In addition, if the vehicles around them are all A.I.-controlled, then swerving into those vehicles could cause them to react in ways that create tertiary accidents. If the initial vehicle maintains its course, those secondary and tertiary accidents are avoided. Finally, with the safety features of vehicles today (which I find annoying are cited in the video to justify a secondary accident, but not in defense of the vehicle hitting the objects falling from the truck), I feel it is more likely that the occupants of the vehicle will survive the accident, without the need to endanger anyone else around them.
'Cause even if it does, its speed will carry it into the cargo regardless. It's either bang up the front of your car or swerve into the surrounding traffic. Besides, the situation is purely theoretical in the first place.
Actually, it should have enough room to stop, because the boxes are not coming to a dead stop as they fall out of the truck; they are just decelerating. Your car is able to decelerate faster if you're not tailgating.
Eh... as soon as they hit the road, they would slow down significantly, even if they do roll a little. The car is depicted relatively close to the truck, so there's a good chance that they'd hit the car before it would come to a complete stop. Though, either way, it still is a theoretical scenario... you get the point it's trying to make.
I would argue that the kind of programming needed to determine if a car is safe or not is so much more complicated than just driving that it won't appear until well after self driving cars are common
Thankfully, driving so close to a car that something falling off will hit you no matter what you do is already against the law--regardless of if that car has anything that will fall off. SO, just program the safe distance for all cars, regardless of tie-down. Oh what's that? That's what the obvious answer was all along? Yeah, that's right. this scenario would never happen.
Nicholas, imagine a car is tailgating you. You cannot stop him from tailgating. The truck now pulls into your lane. You have no other option. You cannot just say to avoid every situation; it is like telling a computer to avoid viruses by turning off the internet. It is not a viable solution. There will be a moment when this happens, no matter what we do.
Nicholas Wright make it a child walking into the street with no time to safely avoid without causing a crash, so you either run the child over or crash
I agree that driverless or self-driving cars are good in terms of safety. However, as a car enthusiast, it breaks my heart that I won't be able to go out for a nice drive like we do today.
The thing is that the companies will almost always choose to save the driver of the car because that's a lot more attractive to potential consumers. Sadly, the only way that any other option such as utilitarianism could be implemented would be government intervention.
The least an authoritarian, controlling company like Google could do is prioritize its customers' lives over others'. Or else why would anyone buy anything from Google? (Google is an example.)
In my opinion, it should always favour the lives of others over the passengers and driver. If you choose to ride in a self-driving car, you are accepting the risks (which, may I mention, are lower) and showing you are willing to take the hard end of the stick if it does end up in a crash.
But that doesn't make much sense. When you drive a car in reality, you also accept that driving is dangerous but YOU still have control over your steering wheel. Self driving cars would never turn a profit if the consumers know going in that their cars don't care about them. Why would anybody buy a self driving car vs keeping their normal vehicles? In this scenario, nothing has changed and nothing about the road is safer than currently.
Honestly: randomize which one to hit, and don't collide with the boxes. I don't think it's moral to decide who gets to live or die; both are equally deserving of life.
No car should be allowed to move out of its lane. If some heavy objects fall in your lane, the first car should keep a safe distance from the car in front, so that it can stop within that distance, and hit the brakes. Moving to a different lane does not mean the car's passengers would be saved, because a car behind in the other lane can just hit your car in the middle and make two cars out of one, killing all passengers. It would also result in a chain reaction, as your car becomes a heavy object on the road for other cars to avoid. So all self-driving cars should keep a safe distance depending on speed and road conditions, and stay in their lane.
I think a good solution would be to have an "emergency" button you can press to take over. Then if there's an accident, it'll be considered a reaction.
Except you would only think to press that button when you’re actually about to crash. And unless you have inhuman reaction speeds, it’s probably too late to do anything.
This scenario might happen, but the car would be at an appropriate following distance, and could also be sophisticated enough to tell if the load is unsafe and tell the lorry ahead to pull over. An interesting topic, but I think it can ultimately be avoided with better technologies.
You should only be following a vehicle at a distance you can stop within. Shit has fallen off the back of trucks and I haven't died, because I keep a gap. Maybe a self-driving car can react faster, apply the brakes harder and sooner, fit more cars into a smaller space and cause less traffic. Sign me up.
This is a bad example. This situation should not exist: the self-driving car should not drive so close to the truck. If the car were driving further from the truck, it would have had time to stop and no one would have been injured.
I'm afraid you're ruining the point. Even if cars were not that close, there would be similar situations. Maybe not exactly same ones, but very similar ones. That's why we have to think about these kind of situations and decisions, I guess.
The car will follow the traffic rules. It will minimize harm to the driver but still follow the rules of traffic. It will not start an internal AI ethical-dilemma debate in the 1/10th of a second it has to react to the data. So if a crash can't be avoided, it will not kill an old lady on the curb because she seems to be worth less to society than the young driver of the car (unless she is the CEO of a big company and important to keeping a thousand jobs... better use face-recognition software and Google to check her profile in that split second...). It will slow down as much as possible, with a quicker reaction time than any human. So in reality, the system will prevent a crash from happening in the first place by reacting to events you can't even see with your eyes but the radar detects.
F. S. Ha ha, if they go with face recognition, I imagine teens will go out wearing masks of famous people and jumping in front of cars, trying to see who the car will choose to "save".
iwasfrancisd Yeah, because that would help them. You can't wear the mask all day long, and if you do, you are identified by that mask, your clothes and the home you left from. If a teen jumps in front of a self-driving car, it would be identical to a teen jumping in front of a car with a regular driver now. If the driver can brake early enough, the result will just be some angry shouting, and if the teen gets crushed, it won't be the driver's fault. Driverless cars can even prove that it was 100% the fault of the dead teenager, and the car won't have post-traumatic stress problems like a real human. So a win-win for driverless cars.
+Jaden Oldfield Even if there is not enough time, I believe the self-driving car will still brake, reducing the amount of damage and lowering the risk of harm to the passengers and other motorists.
In reality, the self-driving car would immediately honk the horn to alert nearby drivers, slow down as needed to get behind a vehicle on either side (also approaching the objects less quickly), and turn into the lane with the largest opening between vehicles, on the assumption that the car it is turning in front of will slow down to let it in. This can be done in seconds, avoiding hitting anyone or anything. I have had cars turn into my lane quickly, and I have always slowed down to avoid hitting them. The option chosen is the one with the highest probability of avoiding all collisions; this is easily calculated by a computer. It's actually pretty obvious if you stop to think about it for a moment.
If you drive intelligently, you should always leave enough space between you and the vehicle in front so that if it stops abruptly (or sheds its load), you can stop within the available space. A self-driving car should be able to calculate this stopping distance based on speed and road conditions (dry, wet, icy etc) and braking performance, and avoid the problem altogether.
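As a rough illustration of that calculation, here is a sketch of stopping distance as reaction distance plus braking distance (v²/2μg). The friction coefficients are textbook ballpark values, not manufacturer data.

```python
# Stopping distance = reaction distance + braking distance.
# Friction values are rough textbook figures for illustration.

G = 9.81  # gravitational acceleration, m/s^2
FRICTION = {"dry": 0.7, "wet": 0.4, "icy": 0.1}  # approx. tyre-road friction

def stopping_distance(speed_kmh, condition="dry", reaction_time_s=0.1):
    """A computer's reaction time is tiny compared to a human's ~1.5 s,
    which is the self-driving car's main advantage here."""
    v = speed_kmh / 3.6                               # convert to m/s
    braking = v**2 / (2 * FRICTION[condition] * G)
    return v * reaction_time_s + braking

print(round(stopping_distance(100, "dry"), 1))  # ~59 m
print(round(stopping_distance(100, "wet"), 1))  # ~101 m
```

A car that always keeps at least this much distance to the vehicle ahead never faces the falling-cargo version of the dilemma, which is this commenter's point.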
Allow self driving vehicles to communicate with one another. That way, in such a scenario, multiple vehicles could work in unison to allow a safe pathway out for the car in danger. This would, of course, require very rapid communication between the vehicles. It would significantly lower the chances of this happening, but not prevent it in every instance.
+Jacob Harrison But in this scenario outlined above, all the other vehicles are driven manually (the SUV is ambiguous), so that still leaves us with the dilemma.
+Jacob Harrison That's a great idea, but we still have to account for people who can't afford self driving vehicles.
+Jacob Harrison That does not solve the problem. First of all, in the first stage we want self-driving cars and normal human-driven cars to share the same roads. Maybe in the future it will be illegal to drive your own car, but for now we want to have the option. Secondly, as you mention, it will only lower the chance of an accident occurring. There will still be some accidents, so we need to solve this ethical dilemma.
+Jacob Harrison
but we still would need to answer those moral dilemmas, because it's not just about cars; a kid can run onto the road
+I Don't Like Spam A very good point indeed
At first I would have crashed into the motorcyclist with the helmet because of the lower risk, but after realizing that I'd be hitting that rider just because they are following the rules and being cautious, it really hit me.
You can't kill somebody for not wearing a helmet. They might have some emergency. So it's best to try and save as many as possible!!
Not you, him.
@@Aditya-jb8xw Well, the situation is you have to hit someone, or you just try your best and risk your life. So it's up to you whether you want to risk your life or theirs, but if you have to hit someone, it's better to hit the guy without the helmet. You don't know for sure that they are in an emergency. Unless you want to kill yourself, better to hit the no-helmet guy than the girl wearing the helmet.
@@Aditya-jb8xw well, we can't kill the responsible drivers either :/
@@CelVini Can you read properly?
The system should deploy its hidden rocket launchers to blow the boxes up.
@Patrik VV. LOL
100th!
Bruh, this isn't a freaking 007 movie irl
Additional dilemma consideration for you now, those boxes have kittens inside..
@@isaaca9123 but won't those kittens die anyway in such situation?
This video brought up a lot of points I hadn't considered about self driving vehicals. Very interesting topic, I'm curious as to where it will lead.
+Eyespelegud vehicals
+Javier sebastian And vehicles.
I think the vehicle should hit the brakes, and then maybe reverse, park, or idle?
Jacob Griffin
you can't hit the brakes and reverse at the same time. The two nullify each other.
If it reverses, it may slam the car behind you, if it parks, there's a probability between being fine, or being slammed by the car behind us, or being hit by that bale of hay.
I see that most comments missed the main point of this video. The given example is designed to tackle the ethical dilemma by assuming the choice between different victims is inevitable and you guys are just trying to run away from it by reversing the assumption itself saying it can be prevented somehow.
BTW the assumption is fair and almost certain to occur especially at the early stages of self-driving vehicles.
I was thinking about the same thing. People are going too far from the original question to find their own answers.
We don't care, it's the comments, not the video, 🚗 just use the secret nuke to help with the boxes...
People see one benefit and don’t bother to think past it.
Islam shatta
You understand that most of these comments are jokes... right?
Yes! Someone gets it! When I tell people I legitimately would pull a lever to save the five and kill the one or whatever people look at me like I'm crazy or like I'm a psychopath.
I just want to say I think this is a beautifully realized piece of communication. The pace of information flow is accessible without being patronizing, the text is very well structured and delivered, and the graphics and music support it perfectly. Good job, people!
That is if you only see 3 options. I see more.
If the other cars on the road are also self driving, they can all instantly start to make way for you to minimize damage and there is a good chance everyone can walk away.
Otherwise, you can still swing towards the car, but not smash into it. Just get mostly out of the way of the falling thing.
And blame goes to whoever tied down that load.
I'm glad I'm not the only one who thought of this option. If our cars communicate with each other, then the motorcycle moves onto the shoulder to make room for the car. It actually improves everyone's odds of never getting in a wreck.
I think you're missing the point of the ethical dilemma.
Marco Aurélio Deleu My point is we can avoid them all together
+Kaleb Bruwer The point is that at some point, somehow, somewhere an unavoidable collision situation will present itself and the car will have to make a choice. We can argue and implement infinite amount of safety measures to avoid this from happening, but at some point we need to assume every safety measure failed and a choice of collision need to be made.
Marco Aurélio Deleu That's ok with me, that is MUCH better than it happening many times every day.
Now here's another ethical dilemma: When we get to the point that most cars are self-driving and are 10 times less likely to have an accident, should humans even be allowed to drive on public roads at all?
purpandorange best comment ever
purpandorange I get that it's a joke, but I googled a bit and found some studies saying female drivers are more dangerous and others saying male drivers are more dangerous. Anyway, it seems like female drivers make more mistakes, but male drivers take risks and behave more aggressively in traffic. Can't say I'm too surprised about this.
Yeah. Exactly. Once we acquire the technology, it would have immense benefits for traffic as a whole. So in my opinion, yes, people should not be allowed to drive.
Peter Kazavis I guess there will be special tracks for human drivers even if we decide to limit public roads for self-driving cars only.
It won't happen now, but maybe 50 years into the future. When our kids are all spoiled from self-driving cars and human driving becomes a niche.
This is very interesting to think about. It also raises an assumption here: In this scenario, if all cars were self-driving, could they sense everything around them or not? If so, the other cars may be able to 'see' this event happening and account for it to move out of the way to make a gap for the oncoming car that would otherwise hit the object. However that seems to assume a perfect overseeing system, probably too difficult to create at this stage. Awesome to think about tho
What would work is if cars could connect with nearby cars. A car senses danger. It tells other cars what it will do. Each of those cars says what it will do to avoid it. The car behind stops. Etc. Complex, though.
If all of the participating vehicles are self-driving, that would in theory work, but it doesn't solve the underlying problem. Of course that's something we would want to implement, since it minimizes danger to humans, but the scenario is a hypothetical. Even if you can avoid this specific scenario, you can always find scenarios where you're back to making this kind of difficult decision, which is why it's a problem that needs to be solved.
Laser light sensors would be faster than radio or video sensors. They'd just need to be really sensitive without being a nuisance to the eyes and other senses of people and other living creatures, inside and outside the car.
Your analysis sounds fine yet it does not capture the essence of the dilemma. If the neighborhood car makes a sudden turn, it may affect other cars, not necessarily but likely. There will be a cost then. And there will be a chain effect for this incident, and the cost is hard to estimate. The essence of the story is that any emergency may pose a decision-making dilemma (may not be fatal but to a certain degree) for the AI, who would find it hard to reconcile between efficiency (for what it is designed) and moral requirement.
The future where cars dish out street justice haha
Phonzo Cisne "I AM the law!" - 2020 Prius
Dred Tesla.
I think you might have inadvertently referred to Knight Rider...
That would be a great plot for a story TBH.
I've gone over this very topic several times in the last few years and have come to the same conclusion each time. The only ethical way of handling accidents with self-driving cars is for each of them to prioritize the lives of people *outside* of the car. That means they will always choose to put passengers in danger before putting outsiders in danger. This way, it's the passenger's conscious decision to trust their life to a car the moment they enter it.
+Lutranereis Very well put. Using a self driving car would imply accepting responsibility for the decisions the car makes on the road. This is why you'll still always need to be alert, and able to override the controls in the event that you don't agree with the decisions the car is making.
This is very true, Lutranereis, but adding more technology to keep the rider safe as well would make your theory perfect.
+Lutranereis Well, if company A offers the car you suggested and company B offers a car that saves the occupant then I'm going to buy from company B.
Of course you could have the government force company B to act like company A, but government regulation of the economy is always immoral in the first place.
+Xezarious42 This could be regulated by holding the driver responsible for the decisions the self driving car makes. Company B's cars would inherently be more prone to cause harm to others, even if they're slightly safer for the occupants. Occupants of company B's cars would thus suffer more liability on account of their car's decisions in the long run.
The argument only works in a scenario where you could be severely hurt and would rather sacrifice someone else to minimize your harm. But you could never predict that if you buy company A's car it would save your life and company B's car will get you killed. Both companies would have good safety track records if self driving cars become common.
Ultimately people will choose the car that is least likely to hurt someone or cause damage regardless of if it's self driving or not, because they're responsible for that damage.
+Xezarious42 "government regulation of the economy is always immoral in the first place." Lol.
The real question is why that self driving car is tailgating the semi.
Salman Hussain 😂
Exactly what I was going to say
The self driving car should theoretically be seeing and predicting any eventuality. It uses sensors to see what the human eye cannot. So theoretically, it could see the boxes becoming more unstable within the truck and therefore take corrective measures (like slowing down and giving more distance to the vehicle in front) before the boxes unload.
Swerving should not actually happen with a correctly programmed self-driving vehicle, because it can see and predict the danger before it actually materialises.
EXACTLY WHAT I WAS THINKING
They mention later in the video that reality wouldn't play out like that, but that is not the point. The point is, should the AI choose? In reality, don't tailgate. Don't crash into anyone. Obey traffic rules. There's no need for ethics to be involved.
Unfortunately, in the end the decision will probably be financial: who do I hit to minimize the expected liability?
It would be financial, but it wouldn't be about liability. People aren't going to buy a car that prioritizes causing minimal damage over the life of the driver. And they can't really be blamed for that since some people will have their kids in the car. So that will more than likely be what would happen.
I think the biggest hurdle will be getting through the phase when both kinds of cars exist. When all cars are self-driving, and especially if they can coordinate with some central controller, accidents will probably be almost non-existent, so it really doesn't matter which way you go. However, in that phase I expect huge incentives to switch to self-driving cars; for one thing, your insurance premiums will likely be much, much lower, enough to justify the cost of upgrading your car.
+purpandorange It depends on how public the information on this niche case is. If it becomes a selling point for cars then what you said will be true, but if it's hidden away in the code and nobody but the producers are even really aware of it, then FortNikitaBullion's idea is totally plausible.
That's my fear🤧
With a set standard of having to prioritise minimal casualties and then minimal damage, either set by a group or government regulation, I think that could be avoided almost entirely.
You can put a constraint on the self-driving car that it is not allowed to follow so close to a vehicle that it cannot stop in time should something fall off the back of it. That would avoid the posited decision entirely.
+Crispy Druid I think they actually do that
they already do this and thus this issue is completely defunct.
+Crispy Druid Yes... but what if debris enters the road from an external source... a fallen tree or a cow, for example...
+Crispy Druid
It is only a thought experiment that pinpoints ethical problems. Even if you eliminate this particular situation, the problem is still there. Random situations will occur and accidents will happen.
For example, instead of a box falling from the truck, something could fall from a bridge above; in a forest a tree can fall; in the mountains a big rock can simply drop (some roads in Montenegro are famous for that). You cannot predict everything; you cannot avoid all the risks.
+Crispy Druid Wow, such shallow thinking... that was not the point of the video at all. There will always be a situation you can't predict. You'll learn that eventually... when you grow up.
Answer: make trucks with loosely bound cargo illegal
yes. better answer
It is
yes, WE think that will save OUR lives.
yes
Yeah, no one ever breaks laws.
If the car is intelligent enough to identify different scenarios at the point of the incident, wouldn't it be intelligent enough not to drive too close behind a flatbed truck with an unsecured load?
+sn0tkore
yeah, but it could be a kid running onto the road, so the question still stands
+0cards0 I agree
+0cards0 Then maybe we need to put chips in new born to make them self-automated walking machines aware of traffic laws.
well, how will it know it is unsecured?
+sn0tkore That's not answering the moral questions asked
If and when all cars (and motorbikes) are self driving, they could be connected to a large network. The car spots danger and notifies the motorbike to the right to move out of the way. Connecting all self driving cars could solve a lot of problems
Aka Skynet. The world will be hacked!
+Kai Widman
Surprisingly true.
But who would hack the highway?
Wouldn't that kill them too?
As far as I know, there are no kamikaze hackers.
+Kai Widman yep, that's what I'm afraid of. Everything is becoming hackable, which is pretty scary
*****
just for it to fit into the scenario,
+Westloki It may not be self-driving, but it will be handing telemetry to the other vehicles and also warning the rider of hazards. You don't need all vehicles to be self-driving; once most of them are, they'll be able to keep the human-driven ones safer too.
Doing a report on self-driving cars, struggling to find content... then I watched this and my mind basically melted.
In a world of self driving cars, why are people still riding motorcycles?
... because we need a lot of organ donors.
Because motorcycles are fun? Because for some people, an accepted amount of danger is thrilling? The same reason people climb mountains without safety gear today. Expecting everyone to take the safest option, or assuming that for everyone the safest option is inherently the best, or worse, the only option, is foolishness. YOU FOOL! YOU FOOLISH FOOL! YOU FOOLHARDY FOOLISH FOOL!
I, Robot.
Also why not SELF driving motorcycles?
motorcycle riders get into fewer accidents
Because riding a motorcycle kicks ass.
It's a bad system that would allow itself to be boxed in in the first place. If there are no contingencies, create contingencies. This is especially true once all cars are automated and are all just part of a larger system. All should be able to compensate for any other under any circumstance.
+Abraxis86 Agreed. The only unavoidable accident I can imagine is one not caused by other vehicles. Like, say, a piano falling from a tall building in front of the car.
+Abraxis86
still a kid can run on the road
+jed92y wildlife (think big elk) jumping onto the road, pedestrians, etc.
I mean, I think you guys are kind of helping my point. Neither of those situations is caused by other vehicles. But honestly, I think there is technology that can be used to mitigate risk from those types of situations. Thermal cameras, for example, would probably go a long way to preventing those. Traveling slowly in high-pedestrian areas, which is already in effect, but with far better attention and response from an AI car versus a human. Redesigning roadways is another possibility, like creating animal crossings under major roadways, already being done in a lot of places. There are solutions for almost any situation that you can think of.
jed92y It doesn't matter; programmers will still be faced with this dilemma, because not every road in the world is going to be immune to wildlife, or even have the funds to completely renovate its infrastructure...
The colours in this video are INCREDIBLE!
This COULD be countered by another advantage of self-driving cars: instant communication. Let's look at this scenario again.
A heavy load of objects falls in front of your car. It swerves right, but avoids any impact as every other vehicle (being self-driving) immediately moves in sync with your car, organically dissipating the distortion in traffic caused by you not dying. We don't make the endangered vehicle drive away from the rest; we make self-driving cars and regular cars drive separately.
Ryan Low Signals take time to send, receive, process, and respond to. These cars will not be talking to each other directly, but through servers over an LTE connection (or other long-range wireless), so just relaying the signal will take a fair few milliseconds to seconds (probably a few too many). If cars were to beam directly to one another, not only would that pose major security risks (imagine someone faking such a signal and launching your car off the highway to avoid a car which isn't even there), but handling new connections all of the time would also be a nightmare on the software side of things, not to mention that under poor conditions the signal may not even get through (although this depends on the tech they use). Overall, car makers are advertising communication as a system where cars help other cars in the network adjust to road conditions etc. There would also be a whole new ethical problem where the speedy response to the relayed signal causes the responder to have an accident (say, slipping mid rapid lane change), since physics is the limit to what software can account for (i.e. a sudden crosswind in slippery conditions whilst rapidly changing lanes is a recipe for disaster). There's also the situation where the other cars may not be self-driving, which is the most likely real-world scenario. Regardless, this video is not focusing on avoiding the situation, but on how a car should react when something like this inevitably happens.
Good point.
@@zoravar.k7904 So, would it work to, for example, have each car send its position in real time to the server, and then, when the accident car sends the signal, have the server calculate the order each car in the web must follow?
Also, if the car must choose, it will probably choose the option with the smallest financial loss. That is, the one that makes you lose the least money.
@@norielsylvire4097 It could work, but that would require every company to use a centralised system. I think a solution where each car avoids the accident using just its own data is the realistic implementation here. Since then each subsequent car would avoid the accident using its own sensor data. And if there are some industry standard response strategies, then a mass response to an accident should be predictable despite the fact that each car is working independently. A beacon system similar to TCAS could also be used where cars tell each other which direction they are headed. But I don't think coordinating strategies would work any better than all cars using a predictable response. Not to mention in such a split second response things like packet/connection loss could mean chaos if we rely on a server. This would also consider non-self driving cars since it is a dynamic system.
@@zoravar.k7904 the reason I had for using a server that way is because if every car uses its sensors and calculations to determine a movement, each car would take some time to receive the information, to process it and respond to it. In that scenario, however, assuming a separation of 1.5m between each car, the server could send an order to 10-15 cars in a line on the other lane to decrease the distance between them by 10% leaving enough space for the car in danger to slip in and avoid every collision. The server could tell every car behind the obstacle to slow down to avoid a collision.
That would be quite effective.
I did think, though, of connection loss. A possible solution might be to build signal repeaters every now and then, or even use Li-Fi under the road to communicate with the cars, but I admit that I don't know if it would be financially profitable.
As for all companies having to share a server, that isn't really a problem. Think, for example, of how every company uses WinRAR rather than programming its own RAR compression/decompression algorithm. In this case, the server would be run by a company dedicated to it, and every car would just use it, the same way companies use Windows computers or light bulbs made by someone else instead of making their own.
In this kind of scenario, the cars would be built to send sensor information to the server, and the server would do most calculations.
Separating the work and letting some companies do a part of the job, and others do other parts is a good way of improving the results.
So, basically, a self-driving car would be the sum of the car with its sensors, built by car companies, and the calculations from this one server.
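A toy sketch of the TCAS-style beacon idea from this thread: each car broadcasts its intended evasive maneuver and neighbours apply a fixed, predictable response rule instead of negotiating through a central server. The message fields and the rule are illustrative assumptions, not any real V2V standard (actual DSRC/C-V2X messages look different).

```python
# Hypothetical intent beacon + fixed response rule, loosely inspired by
# the TCAS analogy above. Not a real V2V protocol.

from dataclasses import dataclass

@dataclass
class IntentBeacon:
    sender_id: str
    lane: int         # sender's current lane index
    target_lane: int  # lane the sender is about to swerve into
    reason: str       # e.g. "debris_ahead"

def standard_response(my_lane: int, beacon: IntentBeacon) -> str:
    """If a neighbour is swerving into my lane, brake and yield;
    otherwise carry on. Predictable for everyone, no server needed."""
    if beacon.target_lane == my_lane:
        return "brake_and_yield"
    return "maintain"

beacon = IntentBeacon("car_42", lane=1, target_lane=2, reason="debris_ahead")
print(standard_response(2, beacon))  # -> brake_and_yield
print(standard_response(0, beacon))  # -> maintain
```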
I predict that future cars will continually be made structurally stronger and safer for occupants during a crash instead of having programs to tell it to swerve.
+boy638 there will still be accidents
Did I say accidents will never happen? haha
+boy638
but we have those cars today, which means we need to answer those questions now ;)
+boy638 Well, what is your point then? Don't come here offering a "solution" and then say the "solution" isn't a solution.
+boy638 I understand your point. The question is, to what extent is "stronger and safer"? What if the truck in front of them were carrying blocks of concrete? What about logs of wood? Or if it were to spill industrial chemicals onto your car? Would facing the obstacle still be a choice, or would swerving and avoiding it be much safer?
In the video, the man said that if we were driving that boxed-in car in manual mode, whichever way we reacted would be understood as just a reaction, not a deliberate decision. Self-driving cars are predicted to reduce traffic accidents and fatalities by removing human error from the equation, and there can also be other benefits, like decreased harmful emissions and minimized unproductive, stressful driving time. This video is talking about the ethical dilemma of self-driving cars. He also asked whether a random decision might still be better than a predetermined one designed to minimize harm. I think it is telling us about making our own decisions, the correct reaction and response, and helping us learn about technology ethics. Although reality sometimes may not play out like our thought experiments, that is not the point; they're designed to isolate and stress-test our intuitions on ethics.
Everything you said is true, but ending the video there does more harm than good. Firstly, these issues aren't specific to self-driving cars; a human would also struggle. Secondly, more emphasis should be placed on the fact that self-driving cars objectively kill fewer people
I love this subject - because not only is it going to obviously save lives - it's also advancing a broader understanding, conversation, and debate about ethics which benefits us all. People are still going to die, but damn, far fewer than today.
Self driving cars would automatically be a safe distance from other cars so they could brake in time
Of course
That's not what is going on here. This is a worst of the worst case scenarios. Everything fails, and the car can only make one last decision.
This is a serious question programmers and self driving car designers have to worry about. Not everything can be accounted for, and not everything will work perfectly. Things fail, and the goal is to minimize such failures. However, there is always a possibility of absolute failure.
Just think about the Apollo missions for a second. They tried to account for every possible scenario, and yet, things still failed and unknown potential errors came into play.
That’s not how hypothetical situations work, there will still be mechanical malfunctions and “acts of god” that will make these kinds of scenarios unavoidable.
We are discussing ethical dilemmas not about how much you like self driving cars.
One thing to consider is that self driving cars almost entirely eliminate human reaction time. This means the car can make decisions significantly faster than humans and dramatically reduce the chance of a collision taking place. Granted there will be some scenarios where a collision may be unavoidable, but in practice given collision rates of self-driving cars vs human-driven cars, especially when you consider evolution of self-driving cars working together on the same road sharing information with each other in order to even further reduce risk, I don't think this will be a problem.
It will be a much rarer problem, but that can't guarantee the elimination of similar situations in the future, which is why it's a problem we need to solve sooner or later; especially since it applies not only to self-driving cars but to basically any situation where you need to decide between multiple human lives.
Program the car to take the action that is least likely to result in injury or death. If any one of the 3 choices given in the video will result in a fatality, then it's inevitable, and thus the choice doesn't matter, so long as the path least likely to result in injury or fatality was taken; the result would be identical, if not worse, in a manually controlled vehicle. The argument of premeditation holds little water, as the individual hit by the vehicle is injured or killed regardless of whether the decision was pre-programmed or an instinctual reaction.
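A minimal sketch of that "least likely to injure" rule: score each option by expected harm (probability times severity) and take the minimum. Every number here is an invented placeholder, not real crash data.

```python
# Expected-harm minimization over the three options from the video.
# (probability, severity 0..1) pairs are made up for illustration.

OUTCOMES = {
    "hit_debris":       [(0.8, 0.4)],             # occupants; crumple zones help
    "hit_motorcyclist": [(0.9, 0.9)],             # unprotected road user
    "hit_suv":          [(0.7, 0.3), (0.5, 0.2)], # occupants of both vehicles
}

def expected_harm(outcomes):
    return sum(p * s for p, s in outcomes)

choice = min(OUTCOMES, key=lambda o: expected_harm(OUTCOMES[o]))
print(choice)  # -> hit_suv under these made-up numbers (0.31 vs 0.32 vs 0.81)
```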
+DaCamponTwee But the issue of responsibility remains if it is pre-programmed: the programmer has to determine someone's probable fate in such a scenario, which they would not have to otherwise. Can they do that?
Mohammad Tausif Rafi I have no moral problem with it. It's sort of like the "murder 1 person to save a dozen people" trolley scenario that TED did. It is most logical to take the course of action least likely to result in the most injury.
DaCamponTwee
I agree with you that from that perspective logically it is the best course of action to take.
+DaCamponTwee
I think they implied that hitting the helmeted motorcyclist has a lower chance of killing them. So by the 'least injury/death' logic the car should always hit the driver with the helmet on in this situation. The problem with this is that it's basically punishing the motorcyclist for being prepared and rewarding the other one for being unnecessarily vulnerable
+Jim No, the car should just brake in a straight line. Any incident that means the car can't come to a 'safe' crash speed before impact means the car has a huge risk of losing control if it swerves. The car should almost always brake in a straight line.
Besides we're talking about extremely marginal cases here. The very example exposes that we aren't thinking of AIs properly. The car will not let itself get boxed in. It will not follow a 'risk' vehicle (something that may have cargo fall off it) at a distance that means it can't come to a complete stop before hitting anything that might fall off the back.
Can't the car just slow down so that the damage done will be minimized? The car behind you would also slow down if they also had a self-driving car
Jacob Reddy what if they didn’t have a self driving car
@@thalespro9995 it'd be unlikely, like very unlikely in the example given because, well, all cars are automated in the riddle. it's only things like motor bikes (and that's a stretch) that aren't.
retosius well what if it’s a motor bike behind u
if it was following proper traffic laws regarding stopping distance yes. but in the scenario it says you can't stop in time. therefore it is tailgating.
retosius well in real life people drive cars from different eras, so chances are there would be plenty of people still driving non self-driving cars
1:21 Exactly! This is NOT something we should be highly concerned about for now. We should start by focusing on developing the technology and educating the general public that YES, self-driving cars ARE SAFER, as they have a much, MUCH lower rate of accidents per distance travelled than human drivers do. Cases like the one depicted in this video will occur rarely, and will definitely occur much less than on our current roads, dominated by human drivers.
In my opinion, videos like this one are just damaging the general public's perception of self-driving cars and setting their development back even further...
I disagree. We should not accept any advance in technology like this without question. Self driving cars, when there are accidents (and there already have been many), present many ethical dilemmas, and we need to be sure the people in charge of solving them will do right by us. We need to think more about what could happen, not just blindly accept progress. I would love to see the day where most cars are self driving, roads are safe, and travel is enjoyable. But not at just any ethical cost.
@@josephcowan6779 You're then putting the idea of justice ahead of the lives of many. 'Cause we know for a fact that fewer accidents are caused. The tricky part is just who we blame. Is it truly that important?
@@MPRF12345 It could be. The tech has only just rolled out. We can expect to see many mishaps with it in the near future, even realizing that in a decade it will probably be much safer for everyone. But in the meantime, yes we need to ask whose fault is it, and is it just?
This video made me question my utilitarian philosophy big-time - going well beyond the efficacy of self-driving cars. I realize now I was just generalizing by saying I believe in utilitarianism - saying stuff like "to minimize harm" or "to maximize happiness". But, as is usually the case when it comes to one's personal philosophies/ beliefs (especially if said "one" is twenty two), it turns out life is tricky and many a scenario could be thought up that leads to conflicting answers using the same basic rules.
I'm utilitarian too, but with one exception: I don't expect anyone to sacrifice their own life. Some people are willing to do it, and that's fine, but expecting everyone to do it is asking too much. This solves a lot of ethical dilemmas and trolley-problem variants where the person's own life is involved. It's not that they believe their life is worth more than the lives of the five people tied to the other track; it's just that self-sacrifice is too much for most people, and that's perfectly understandable.
2:20
Neither; both are pretty much guaranteed to die. It's whichever one has a leather jacket AND a helmet, because a leather jacket will actually reduce physical harm.
In the example shown there was ample space to slow down and (partially) change lanes, getting very close to the motorbike or the other vehicle. The box was off to a side and could be avoided easily. Self-driving cars know *exactly* how big they are and how close they can get, and are designed to avoid the situation to begin with.
In this particular example I think it should be obvious the car should apply full brakes and ensure the impact, in order to minimize harm to others surrounding the primary accident. I also foresee a future where car accidents can be 100% removed. Sensors should have notified the truck that its load was unstable.
endure the impact*
+AirTimeEh Well, in that case you would have to have sensors in the strap connected to the vehicle. And how would it know whether the load is unstable? To the sensors it might just look like the load was being reduced rather than becoming unstable.
+vids and clips Or vibration sensors in the bed of the truck, or cameras on the highway monitoring cars, or not manufacturing trucks capable of such a flaw, or having trucking regulations that prevent it.
AirTimeEh I guess you're right, but they would still have to be very precise about how much it vibrates, because it will vibrate no matter what.
+AirTimeEh And/or 'bots load the trucks. Right now it's people with forklifts and tie-down straps. In the future it could be 'bots and an advanced cargo containment system that works well with them.
Well, if you can't decide what's ethically correct... well, normally it'd be random anyway, so just program the car to have a random response.
+manuthePROcrastinator You're still programming it to kill, albeit randomly :P
+manuthePROcrastinator If car A is random and car B saves the occupant I'm going to buy car B though.
+Bryan Grünauer Chagas
No choice but to technically be programming it to kill, but at least with this method there would be no bias or discrimination.
*****
There will always be the possibility of manually driven cars on the road, and a system like the one you're proposing would be crazy difficult and time consuming to implement throughout multiple car companies' products.
The thing also is, there's option 4: hit the brakes. If there is enough space between you and the one behind you, the only thing that happens is you hitting the objects at a slower pace, minimizing damage to you and your surroundings. The other point is that the falling objects still have momentum, so if they fall and you hit the brakes (and hold them, to signal the stop to others behind you who didn't see you stop), you would minimize damage by still crashing into them, but with less damage than otherwise.
This is one of those cases where the ejector seat that I saw in a movie would be useful. The seat would propel you upwards, which would allow you to leave the car if it crashed.
"if shit happens to you, who should you sacrifice as a consequence?"
Indeed, Jacob.. I think that is the right thing to do.
+Jacob Duus Gundesen Except if you go to war, you know what you are getting into, and you probably trust and care about your fellow soldiers. Not everyone wants to drive a car that will sacrifice you to save some person you've never met and don't care about.
Say you were at a grocery store with 9 other people, when suddenly someone with a gun enters and says "one of you has to die so that the rest can live." What would you do? Would you really be so quick to give up your life for total strangers? Do you value yourself and your life so little? It's easy to make these decisions on the internet from the safety of your own home when talking about a fictional scenario that probably won't ever happen, but if the time ever came, I don't think it would be so easy to make that choice for real.
Ok, but here's the thing: if someone with a gun enters a grocery store and threatens 9 people, shit is not happening to you.
Shit is happening to 9 people all together.
+Saulo Goki But then again, you can make this argument about any road accident that involves multiple vehicles and drivers. I don't think it's about you or them; it's about an unfortunate situation.
See, if a bridge with 9 cars falls, it is one big shit happening to multiple people...
If a giant block falls onto my car and I throw it over to the side lane by reflex, I'm now part of the shit happening to other people.
...but then, I know this is much more complex than 0 and 1
Simple concept, since I can tell you we are a little ways away from putting a system on a car that can determine headgear. My thinking brings it back to a more simplistic approach based on instinctual nature, just now programmed. I'm going to give my thoughts in a stepped-up progression, starting basic and simple and moving up to higher complexities. My view is that of someone going into electrical engineering, as well as a fireman on a department that services two of the most heavily used midwest interstates. I have also added, at the end of each order paragraph, a summary of what devices and/or systems would be needed to include that order in a vehicle.
First order: all else fails, veer right, slow down. In countries where you drive on the wrong side of the road, veer left. This is simple, as it would push you or the others onto the shoulder, away from oncoming traffic. This minimizes the chance that someone will be pushed into an oncoming lane and, assuming everyone is paying attention, will allow the person on that side to divert to the shoulder. This requires no input device other than danger detection.
Second order: veer toward a noted/mapped shoulder, including grass areas without a steep drop. No shoulder? Stop. This will be used in places where there may be a closer shoulder on the opposite side, or in the case of construction where the shoulder was moved, or on a bridge with no shoulder. This can be maintained by adding a single bit of information to all routed roads accessible to the car, updated based on reads by it or other cars. The basic system requires only a stored map of safe shoulders.
Third order: if all local lanes are moving in the same direction and there is a large gap, veer towards it, including the inside shoulder of a highway. This is the first step up in complexity, relying on simple wide-angle cameras or any form of distance and object detection already in use. In this case, the motorcyclist/scooterist could potentially take the penalty; however, they also have the smallest footprint in a lane and the highest capability of accelerating out of the way. This method will only be used if their presence is not detected, either because they are at a distance from the sensor or due to any other occurrence that throws off the detection. This requires only local-awareness technology already in use.
Fourth order: large gaps detected nearby by others, veer in that direction. This begins a much more complex area whereby the cars start communicating with each other. This networking is already being put in place by modern self-driving or assisted-driving cars. This order relies on the other vehicles informing your car of potential avoidances and where it can veer, with the same idea as moving toward the right for the shoulder, only this time it's a moving shoulder/gap. This requires sensors' information to be broadcastable locally, possibly over an agreed wifi channel.
Fifth order: communicate with other smart cars to make a necessary gap to allow yours to divert. This is merely an increase from the previous order, adding the complexity of all the local cars determining the optimal direction of movement. This requires the computers to link and process all information quickly and accurately, then send out instructions.
Sixth order: no detectable oncoming traffic, move into the oncoming lane, then back immediately. This is placed further down the complexity tree, as it would require a good bit more in communicating systems before relying on a car to determine an oncoming lane to be the correct choice right away, even if it is observably empty. This requires sensors either on the road, or for every vehicle to be required to reliably broadcast its position to a system immediately accessible to the vehicle.
Through these six orders you might note that I haven't actually addressed the given matter directly. This is because that would rely on deliberately deciding to hurt someone. I indirectly stated my decision in the second order, though, by saying that if the car can determine there is no shoulder, it is to simply stop. This decision is due to my experience with recorded crashes and from participating at accident scenes as a fireman. If there is a shoulder and people can move to it, damage and harm will be minimized by diverting in that direction. However, if there is a barricade on both sides, as is often the case with a bridge, then diverting will cause more damage and all involved passengers will end up hurt. If it is guaranteed that everyone will be hurt extensively, then minimizing the number hurt becomes the priority, to improve the ability to save those hurt.
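A condensed sketch of that six-order cascade, assuming stand-in predicates for the real perception and networking modules; it tries the most informed maneuver first and falls back to the unconditional default.

```python
# Hypothetical fallback cascade for the six orders described above.
# Each world[...] flag stands in for a real sensing/networking module.

def choose_maneuver(world):
    if world.get("oncoming_lane_clear"):        # sixth order
        return "use_oncoming_lane_briefly"
    if world.get("cars_can_make_gap"):          # fifth order
        return "request_gap_and_veer"
    if world.get("reported_gap_direction"):     # fourth order
        return "veer_" + world["reported_gap_direction"]
    if world.get("local_gap_direction"):        # third order
        return "veer_" + world["local_gap_direction"]
    if world.get("mapped_shoulder"):            # second order
        return "veer_toward_shoulder"
    if world.get("map_says_no_shoulder"):       # second order, no shoulder
        return "brake_straight"
    return "veer_right_and_slow"                # first order: all else fails

print(choose_maneuver({"local_gap_direction": "left"}))  # -> veer_left
print(choose_maneuver({}))                               # -> veer_right_and_slow
```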
Finally someone who gives a logical answer. While yes, it doesn't directly answer the question at hand, it does, in essence, solve it. In these situations, you do give an answer based on possible scenarios, understanding that there is a possibility that someone gets hurt.
Most people in these comments just want to say "brake" or question the validity of the question, but you actually address possible programming solutions.
How about rolling a quick die? The car has three options: if it lands on 1, it turns left; if it lands on 2, it stays straight and the passenger dies; if it lands on 3, it turns right.
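Taken literally, the die roll is a one-liner; random.choice is uniform, so each option comes up with equal probability. No bias, but also no judgment.

```python
# The die-roll proposal, literally: a uniform random pick among the
# three maneuvers. Option names follow the comment above.

import random

def emergency_maneuver():
    return random.choice(["turn_left", "stay_straight", "turn_right"])

print(emergency_maneuver())
```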
The solution to the dilemma is to have the cars network with each other so that they always leave each other escape routes.
This would not require all cars to be self-driving and networked, as long as about half of them were; and before we get to that point, simply program the self-driving cars to follow at a greater distance.
That's easy. The self-driving cars can be programmed not to go beyond 25 mph and to keep some distance from other cars. In that case, the decision will be which driver gets slightly injured...
This is why vehicle to vehicle communication is SO important. If a car can communicate with all other vehicles in its vicinity and those vehicles react to one another, together they can find a way to minimise damage for everyone involved.
but at some point, some computer has to make a decision: whose life is prioritized in the event of an unavoidable catastrophe? And at that point, should the decision lie with the autonomous machine or the human? It is a real-life trolley problem that we will soon face.
yeah, this lowers the chance of a crash happening, but it does not take into account pedestrians, road conditions, etc., so the ethical decisions still have to be programmed and we still have the same issue.
Thank you. Well done. A good question is better than a good answer.
Try to hit everyone
**MULTI-LANE DRIFTING**
Let's make this a thing
Y e s
Thank you Lukas Vivian, very cool
Gotta get that high score
Real solution: The self driving car shouldn't be following that closely to the vehicle ahead, especially a truck loaded with cargo
It's hard to take sides, but this dilemma will push our society to make self-driving vehicles safer than the old ones.
False dilemma there, as in the first place the minimum separation distance could have been designed to give the car the necessary stopping distance to avoid any swerving.
2:05
Minimizing harm would mean keeping straight, and not driving into either cyclist.
Just go in the direction of least object resistance (scan and see where there is the least stuff / the biggest gap). Additionally, invest in front and rear signaling that a stop is about to occur, such as louder horns and emergency blinking LEDs.
I feel like in the future we'll all have ratings like in Black Mirror S3E1, and those with the lower rank will be most likely to be killed.
Social scores are already in China: ruclips.net/video/y5-0llHaZDs/видео.html .
Ya make sense
That’s a good thing
@@alexwang982 no, thats not a good thing.
It's easy: it should just brake. If it goes left or right it will cause a massive crash.
+Viktor6665 In this scenario it said the car can't brake in time, and that option would cause the most damage to the self-driving car, compared to swerving left or right.
+MrTopGunnar If you can't brake in time, you didn't keep the necessary distance. You don't have the right to endanger others.
Viktor6665 Ok, if you can't wrap your head around this one, imagine a 500 lb elk jumping onto the road instead of an unsecured load.
I agree with Viktor.
Anyway, it is a made-up dilemma. Build self-driving trains and trams to keep the schedule. Driving a car is a life experience; it should not be automated.
+MrTopGunnar You always need to brake. That's the best option 99% of the time. Going into a tree or a ditch instead won't help.
The best way is to always maintain a minimum safe distance from the vehicle in front, at which the car can safely stop without swerving; the distance will vary according to the speed of the vehicle.
The answers are obvious: the car will do everything in its power to save its occupants, but it will never deliberately hurt someone else to do so. Furthermore, as previously mentioned, if the cars could communicate and connect with other self-driving cars (within a certain range), they could coordinate to avoid accidents; for example, the car on the left could move and give room to the central car, etc.
I disagree; I don't think the answer is obvious. The car may do everything it can to save the occupants, if that's how it's programmed. But what if the car doing everything it can to prevent a major accident could cost the occupants their lives? You say the car will never deliberately hurt someone to save the occupants, but you have absolutely no way of knowing that for sure. The video already demonstrates this: what if the only way for the car to protect you is to purposely slam into someone else? That is deliberate. The cars communicating with each other is a good idea, but again, what if there is something the AI cannot do, or the AI calculates that moving to the left could cause another huge accident? Is it more important to protect the occupants or to reduce the severity of an accident, regardless of risk to occupants?
I have thought about this for a long time, even before I watched this video. The conclusion that I came to was that I think the only possible solution for manufacturers is inaction. If an ethical dilemma is detected, do nothing. If it kills the driver, so be it. If it kills 10 toddlers, let physics take its course. Otherwise, it runs into an ethical minefield that nobody want to tread.
Except slam on the brakes maybe
Surely it's better to swerve into a wall if there's a murderer in the car than hit 10 children
@@grassytramtracks I was talking about a scenario where you are given only two choices. And hitting the wall may also kill the driver.
@@avonord
A car that is programmed to take no action to save the car owner will never sell. Who would buy a self driving car which is specifically NOT designed to save your life? Surely, to the car owner, saving one's own life is the highest priority. Consumers would buy a car that is best for saving themselves.
@@l.n.3372 would you drive into a school of children rather than a wall to save yourself?
technically speaking you do have time to stop. as seen here to the left: 1:05 there are no cars behind you. the story also seems to take place on the highway. by stopping suddenly your speed would drop fast which may help you avoid the heavy objects. and chances are the car will stop suddenly anyway if the objects suddenly appear in front of it.
The best choice for the car manufacturers would be randomized decision making.
Each case would have an equal probability of happening.
In simple words: if the same scenario is repeated 3 times, each outcome will have a probability of 33%.
Nope, I don't think that would be the right decision. Would you bet your life (quite literally) on the outcome of a coin toss? I know I wouldn't. So we have to come to a conclusion about what the priority will be.
This is literally betting lives Lmao ☠️
Two suggestions: 1) Limit the car's ability to make value-based judgements, such as modeling other vehicles as mere objects so that all options are equivalent. This would be more like the current situation, since humans can't process enough information at that speed. 2) Program two modes: a) altruistic mode, which minimizes harm even if the driver must sometimes be sacrificed, and b) selfish mode, where the car would try to protect the driver under all circumstances. The car would default to selfish mode, but the owner can switch it.
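A sketch of what suggestion 2 might look like as a user-visible setting; the weights and option names are hypothetical, and real regulation might well forbid exposing such a knob at all.

```python
# Hypothetical "selfish vs altruistic" mode: the mode only changes how
# much weight outsiders' harm gets in the scoring. Numbers are made up.

MODES = {
    "selfish":    {"occupant_w": 1.0, "outsider_w": 0.2},
    "altruistic": {"occupant_w": 1.0, "outsider_w": 1.0},
}

def harm_score(option, mode):
    w = MODES[mode]
    return (option["occupant_harm"] * w["occupant_w"]
            + option["outsider_harm"] * w["outsider_w"])

options = [
    {"name": "hit_debris", "occupant_harm": 0.6, "outsider_harm": 0.0},
    {"name": "swerve",     "occupant_harm": 0.1, "outsider_harm": 0.8},
]

for mode in MODES:
    best = min(options, key=lambda o: harm_score(o, mode))
    print(mode, "->", best["name"])  # selfish -> swerve, altruistic -> hit_debris
```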
In my opinion, there should be a middle ground where the car will always try to protect the driver, but also minimize harm by choosing the object least likely to hurt anyone.
this is why some companies are doing mass surveys, then telling the car what the majority of surveyors answered; that way it is, in a way, transferring the decision to the people
There should be different lanes on the road for different vehicles
We have failed versions of that. Other than bus lanes, which most people tend to respect, truck lanes unfortunately do not work as intended, and neither do bike lanes. The number of people who still drive their cars over that tiny path is sad.
Want to kill someone? Find out when they are driving close to a cliff and hire 10 people to stand in front of the road! The car will drive itself off the cliff to save the 10 people. Genius!
Lmao
@@ashleapatterson82 This is a worst-case scenario he's talking about, where there is no time to avoid a collision. You can try to brake all you want, but braking doesn't make you stop immediately.
The answer is to swerve as minimally as possible (for a near miss) in the direction opposite to where the objects are falling; and the reason this isn't a problem is that if most people are using self-driving cars, they should be able to cross-communicate, which means whatever is in the way of the direction the car swerves in will react to avoid a collision, and if that were to cause another collision, this would be repeated until there are no more oncoming collisions. This "dilemma" vastly underestimates the capabilities of AI, including the speed at which decisions can be made (it could, for instance, decide to drive further away from the car with the objects on it so that it has more time to brake and avoid accidents) and the speed at which it can send messages to nearby self-driving cars.
If we had a mandatory feature in all cars and motorcycles to have a self-driving AI that can override the manual driver for any such collisions, this could never be a problem. I.e. to solve this we have to reverse the current situation of autonomous driving, instead of self-driving requiring a human observer, human driving should require an AI observer. At least on public roads.
In other words, I think this video is completely idiotic, a dilemma created by someone who falsely thinks he understands what he's talking about.
The video is based on a very misguided premise: an action of a human in an emergency situation like this is instinctive/impulsive, while that of a robot car is premeditated/programmed.
If programmed strategies are to be called pre-determined/planned/premeditated, then the impulsive neural pathways in a human body (which have developed over years of experience/learning) that are responsible for taking the action should also be called pre-determined/planned/premeditated. In emergency situations like this, a computer cannot crunch moral algorithms to decide a strategy either. It needs to respond using a very low-level algorithm based on the obstacle states around itself, very much like the short-circuited neural pathways that tell a human what to do in that situation.
This is an interesting philosophical debate for those who don't understand how AI works, but it does not and will not have any practical relevance.
+subh1 but is the car not like a calculator? Give the inputs and it will provide a quick output. It's predetermined based upon the scenario.
Eric O In that sense everything in the universe is predetermined. If you know the state of every elementary particle in the universe, the state of those particles a minute from now is completely predetermined. "Free will" is just an illusion. If you carefully set the initial conditions to be the same, at the level of atoms, and run the same scenario twice, the exact same thing will play out both times. Of course it may be very sensitive to initial conditions, as chaotic systems are. But that's true both for humans and for algorithms. Humans are not beyond the laws of physics or outside the universe (unless you believe such things). The exact same input and initial state (created by the history of inputs) will evolve to the exact same state in the future.
+Eric O Yes, but not necessarily in a way that the programmer can know. Self-driving cars use machine learning where they learn from past experiences (and in the case of Tesla, from the experiences of every other Tesla car in the world). For example, it might just do what another car did in a similar situation which allowed it to get out safely.
This may seem pretentious and a little hard to grasp, but the "random reaction" that humans make isn't really random, is it? There's a lot that goes into our actions that is decided in our subconscious and not in our conscious mind. What a lot of us consider instinct and/or intuition isn't really that, is it? It's a lot of moving parts that we take into account and act on accordingly. That's why objectively correct decision-making can feel off, and why, when there's a missing piece in a situation, it can likewise feel off without us knowing better. idk, I'm rambling. Food for thought?
My guess would be that the car would swerve towards the motorcycle. This would not be out of malice, but it would be due to the fact that there is more space for the car to potentially avoid a crash if it moves in that direction. In other words, the car would move away from the more blocked off area and try to avoid any crash regardless of if one is inevitable.
Better solution, no self driving cars, only trains
Interesting point near the end: If we don't want to make all these tough decisions, we could still just use a randomizer.
I think that in the future, as more and more vehicles become "self-driving", they will be connected together, so a vehicle avoiding a deadly accident (a kid on the road, etc.) would tell the cars next to it as well, so they would maneuver together so that no one gets harmed, or the damage and danger is minimal.
Wow, I never considered this. Pretty Deep... although, I believe autonomous vehicles are programmed to keep a certain following distance.
While true, programmers do have to consider undesirable and unlikely situations. That's really the basis of the problem. It's easy to account for the most common problems and situations. It's hard to try and account for the rare and ethically challenging problems.
Here is how it will be: the car will hit those who pay the least to the manufacturer for not being targeted. So it's gonna hit the poor.
Safrus Salmus It'll probably be per-brand. So it'll hit whatever brand paid least.
That kind of direct "pay to not be killed" plan would become public and unpopular too quickly. I'm not saying that something similar might not occur, but it would at the very least be more circumspect and harder to see through.
+Anthony Bowman Planned obsolescence is widely accepted, people would not even notice it.
Depends on how much facebook coverage it gets I guess :P
if the car company wants to be sued into oblivion. Damage to property is easy to pay off; damage to people, and the medical bills, is much harder.
To me (as a current C.S. student with 20 years of professional electronics and programming experience), the solution is actually quite simple: don't allow any A.I.-controlled vehicle to create a secondary accident. The only choice is for the vehicle to crash into what is ahead of it. First off, if the vehicle following the truck is keeping to the normal rules of driving (the 2-3 second rule), then there is no reason why it cannot come to a stop before colliding with the objects falling from the truck. In inclement weather, the 2-3 second rule should be adjusted accordingly, and the vehicle should still be able to stop. If the owners of the vehicle did not perform the maintenance needed for it to stop properly, the people around them should not be penalized for the owner's negligence. In addition, if the surrounding vehicles are all A.I.-controlled, then swerving into them could cause them to react in a way that creates tertiary accidents; if the initial vehicle maintains its course, those secondary and tertiary accidents are avoided. Finally, given the safety features of today's vehicles (which, I find it annoying, the video cites to justify a secondary accident but not in defense of hitting the objects falling from the truck), I feel it is more likely that the occupants will survive the accident without the need to endanger anyone else around them.
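For what it's worth, the arithmetic behind that 2-3 second rule checks out under some assumed numbers (the deceleration and reaction figures below are illustrative, not measured):

```python
# Sketch: can the car stop within its following gap?
# Assumes ~7 m/s^2 braking (reasonable on dry road) and a short machine
# reaction time; both numbers are illustrative assumptions.

def stopping_distance(speed_ms, reaction_s=0.2, decel_ms2=7.0):
    """Distance covered while reacting, plus distance to brake to a halt."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

speed = 27.0        # ~60 mph, in m/s
gap = speed * 2.5   # 2-3 second rule -> roughly 67 m of space

print(round(stopping_distance(speed)))  # ~58 m: stops inside the gap
print(round(gap))                       # ~68 m available
```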
I'm still not understanding why it won't just hit the brakes...
Because even if it does, its speed will carry it into the cargo regardless. It's either bang up the front of your car or swerve into the surrounding traffic. Besides, the situation is purely theoretical in the first place.
Actually, it should have enough room to stop, because the boxes are not coming to a dead stop as they fall out of the truck; they are just decelerating. Your car can decelerate faster if you're not tailgating.
Eh... as soon as they hit the road, they would slow down significantly, even if they do roll a little. The car is depicted relatively close to the truck, so there's a good chance that they'd hit the car before it would come to a complete stop. Though, either way, it still is a theoretical scenario... you get the point it's trying to make.
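A rough sketch of the physics this exchange is pointing at, with assumed friction coefficients: sliding cargo sheds speed at roughly μg, and a hard-braking car typically does somewhat better, so the question comes down to the starting gap.

```python
# Sketch: compare how quickly a fallen box and a braking car shed speed.
# Both friction coefficients are rough assumptions for illustration only.
G = 9.81
mu_box = 0.5     # assumed sliding friction of cargo on asphalt
mu_brakes = 0.8  # assumed braking friction of a car with decent tyres

box_decel = mu_box * G     # ~4.9 m/s^2 once the box is sliding on the road
car_decel = mu_brakes * G  # ~7.8 m/s^2 under hard braking

# The car out-decelerates the box, so the closing speed falls over time;
# whether it reaches zero before contact depends entirely on the initial gap.
print(round(box_decel, 1), round(car_decel, 1))
```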
The issue with this is that a self driving car wouldn't put itself into a situation like this, especially with a truck carrying cargo.
I would argue that the kind of programming needed to determine if a car is safe or not is so much more complicated than just driving that it won't appear until well after self-driving cars are common.
Thankfully, driving so close to a car that something falling off it will hit you no matter what you do is already against the law, regardless of whether that car has anything that will fall off. So, just program the safe distance into all cars, regardless of tie-down.
Oh what's that? That's what the obvious answer was all along? Yeah, that's right. this scenario would never happen.
Nicholas Imagine a car is tailgating you; you cannot stop it from tailgating. The truck now pulls into your lane. You have no other option. You can't just say to avoid every such situation; it's like telling a computer to avoid viruses by turning off the internet. It is not a viable solution. There will be a moment when this happens, no matter what we do.
Nicholas Wright make it a child walking into the street with no time to safely avoid without causing a crash, so you either run the child over or crash
It’s a basic scenario that could likely happen and even you can’t logically come up with a solution.
I agree that driverless or self-driving cars are good in terms of safety. However, as a car enthusiast, it breaks my heart that I won't be able to go out for a nice drive like we do today.
"And the robot car is now meting out street justice."
There it is folks. TED-Ed confirms self-driving cop cars in the future.
The thing is that the companies will almost always choose to save the driver of the car because that's a lot more attractive to potential consumers. Sadly, the only way that any other option such as utilitarianism could be implemented would be government intervention.
The least an authoritarian, controlling company like Google could do is prioritize its customers' lives over others'. Or else why would anyone buy anything from Google? (Google is just an example.)
In my opinion, it should always favour the lives of others over the passengers and driver. If you choose to ride in a self-driving car, you are accepting the risks (which, may I mention, are lower) and showing you are willing to take the short end of the stick if it does end up in a crash.
But that doesn't make much sense. When you drive a car today, you also accept that driving is dangerous, but YOU still have control over your steering wheel. Self-driving cars would never turn a profit if consumers knew going in that their cars don't care about them. Why would anybody buy a self-driving car instead of keeping their normal vehicle? In that scenario, nothing has changed and nothing about the road is safer than it is now.
If the cars were smart enough to choose who they crash into, surely they would keep the correct following distance to stop in time.
You're not getting the point
Tien Doan But when the technology should hypothetically be able to take a third option, the moral dilemma is tough to set up.
"And now your self-driving car is dealing out street justice." Ha. That was funny.
Honestly: randomize which one to hit, and don't collide with the boxes. I don't think it's moral to decide who gets to live or die; both are just as deserving of life.
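For what it's worth, that randomizing policy is a one-liner to sketch (the maneuver names are made up):

```python
import random

# Sketch of the "randomize rather than rank lives" policy from the comment above.
options = ["swerve_left", "swerve_right"]  # both avoid the fallen boxes
print(random.choice(options))  # an unbiased coin flip between the two maneuvers
```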
What I learned from this: Don't tailgate semis while in a self driving car.
Dark and dangerous times lie ahead.
same thing I said
No car should be allowed to move out of its lane. If heavy objects fall into your lane, the car should already be keeping a safe distance from the vehicle in front, so it can brake and stop within that distance. Swerving to another lane does not guarantee saving the passengers: a car behind you in the other lane can hit your car square in the middle, cutting it in two and killing everyone inside. It would also set off a chain reaction, since your car then becomes a heavy object on the road for other cars to avoid. So all self-driving cars should keep a safe distance that depends on speed and road conditions, and stay in their lane.
I think a good solution would be to have an "emergency" button that you can press to take over. Then if there's an accident, it'll be considered a human reaction.
Except you would only think to press that button when you’re actually about to crash. And unless you have inhuman reaction speeds, it’s probably too late to do anything.
bruh why not just program the car to fly over the object
Yeah, that is the way out.
🤓GTA Addict 😅
Just kidding
…and land in space😛!??
That's actually pretty fascinating, and crazy, because you're directly defining "ethics" and the future.
Why not just slow down as much as possible? You minimize risk to all parties
Sam Wendt You came up with a way to reduce the possibilities, but you fail to answer the question posed.
Thank you.
This scenario might happen, but the car would be at an appropriate following distance, and could also be sophisticated enough to tell that the load is unsafe and signal the lorry ahead to pull over. An interesting topic, but I think it can ultimately be avoided with better technology.
You should only follow a vehicle at a distance you can stop within. Shit has fallen off the back of trucks and I haven't died, because I keep a gap. Maybe a self-driving car can react faster, apply the brakes harder and sooner, fit more cars into a smaller space, and mean less traffic. Sign me up.
ronald rogerson That's nice, but it's time to see past the example.
0:53
Saved me time, thanks.
Idiot.
Very interesting video on the ethics of what we should allow computers to do and what decisions they should be able to make.
This is a bad example; this situation should not exist. The self-driving car should not drive so close to the truck. If the car were driving further from the truck, it would have had time to stop and no one would have been injured.
3 second rule.
I'm afraid you're ruining the point.
Even if cars were not that close, there would be similar situations. Maybe not exactly the same ones, but very similar.
That's why we have to think about these kinds of situations and decisions, I guess.
The car will follow the traffic rules. It will minimize harm to the driver but still follow the rules of traffic. It will not hold an internal AI-based ethical-dilemma debate in the tenth of a second it has to react to the data. So if a crash can't be avoided, it will not kill an old lady on the curb because she seems to be worth less to society than the young driver of the car (unless she is the CEO of a big company, important for keeping thousands of jobs... better use face-recognition software and Google to check her profile in that split second...).
It will slow down as much as possible, with a quicker reaction time than any human has. So in reality the system will prevent the crash from happening in the first place, by reacting to events you can't even see with your eyes but the radar detects.
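A back-of-envelope sketch of that reaction-time advantage (the human and machine figures below are common ballpark assumptions, not measurements):

```python
# Sketch: distance travelled before braking even starts, human vs. computer.
speed = 27.0            # ~60 mph, in m/s
human_reaction = 1.5    # assumed typical human reaction time, seconds
machine_reaction = 0.1  # assumed sensor-to-brake latency, seconds

print(round(speed * human_reaction))       # ~40 m gone before a human touches the pedal
print(round(speed * machine_reaction, 1))  # ~2.7 m for the machine
```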
F. S. Ha ha, if they go with face recognition, I imagine teens will go out wearing masks of famous people and jumping in front of cars, trying to see who the car will choose to "save".
iwasfrancisd Yeah, because that would help them. You can't wear the mask all day long, and if you do, you're identified by that mask, your clothes, and the home you left from. If a teen jumps in front of a self-driving car, it's identical to a teen jumping in front of a car with a regular driver now. If the driver can brake early enough, the result will just be some angry shouting, and if the teen gets crushed, it won't be the driver's fault. Driverless cars can even prove that it was 100% the fault of the dead teenager, and the car won't have post-traumatic stress problems like a real human. So a win-win for driverless cars.
F. S. Not for protection, but for fun. Like a future version of "Running of the Bulls".
iwasfrancisd Well, if some teens want to win the Darwin Award they can and will do so. Self driving cars won't change that one way or the other.
3:09 to 3:25: somehow I thought that sometimes a deliberate decision can lead away from the desirable results ❤
Jesus Christ.. The self-driving car will fucking brake and avoid hitting the objects.
+Euler Characteristic They said at the start there wasn't time.
+Jaden Oldfield Even if there is not enough time, I believe the self-driving car will still brake, reducing the amount of damage and lowering the risk of harm to the passengers and other motorists.
+Euler Characteristic That's also not the point of the video.
Kenny Tran Yeah, that's true.
Also, if the car could come to a complete stop that fast, the inertia would kill the passenger. Lol.
Jaden Oldfield Hah, yeah. A complete stop would be horrible, just like this scenario. I still think slowing down would be the better option.
It would be so scary sitting in a driverless car when the software malfunctions and you crash.
Your body is more likely to malfunction than the software, so that's a pretty irrational fear.
In reality, the self-driving car would immediately honk the horn to alert nearby drivers, slow down as needed to get behind a vehicle on either side (also approaching the objects less quickly), and turn into the lane with the largest opening between vehicles, on the assumption that the car it is turning in front of will slow down to let it in. This can be done in seconds to avoid hitting anyone or anything. I have had cars turn into my lane quickly, and I have always slowed down to avoid hitting them. The option chosen is the one with the highest probability of avoiding all collisions; this is easily calculated by a computer, and it's actually pretty obvious if you stop to think about it for a moment.
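A minimal sketch of that "highest probability of avoiding all collisions" selection; the maneuvers and probabilities below are made up for illustration, and a real car would estimate them from its sensor models:

```python
# Sketch: choose the maneuver with the best estimated chance of avoiding
# every collision. These probability numbers are placeholders.
options = {
    "brake_in_lane": 0.55,
    "slot_behind_left_car": 0.80,
    "slot_behind_right_car": 0.70,
}

best = max(options, key=options.get)  # argmax over avoidance probability
print(best)  # -> "slot_behind_left_car"
```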
Surely it can just break and swerve to the left just missing the SUV
luke Brake, not break.
That will hit the person behind you.
Couldn't the car just brake?
If you drive intelligently, you should always leave enough space between you and the vehicle in front so that if it stops abruptly (or sheds its load), you can stop within the available space.
A self-driving car should be able to calculate this stopping distance based on speed and road conditions (dry, wet, icy etc) and braking performance, and avoid the problem altogether.
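A minimal sketch of that calculation, with assumed friction coefficients per road condition (all numbers here are illustrative, not engineering values):

```python
# Sketch: stopping distance from speed and road condition, as the comment
# above suggests a self-driving car could compute on the fly.
G = 9.81
FRICTION = {"dry": 0.8, "wet": 0.5, "icy": 0.15}  # assumed coefficients

def safe_gap(speed_ms, condition, reaction_s=0.2):
    """Reaction distance plus braking distance for the given road surface."""
    decel = FRICTION[condition] * G
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel)

for cond in FRICTION:
    print(cond, round(safe_gap(27.0, cond)), "m")  # dry ~52 m, wet ~80 m, icy ~253 m
```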
well christ, that music is frickin scary
Agree