
Decision-making: Can we trust AI?

Many businesses and individuals are increasingly relying on Artificial Intelligence (AI) and Machine Learning (ML) for decision-making. However, there are concerns about bias and fairness. Can we truly trust AI systems to make objective decisions? AI systems are only as good as the data they are trained on, and if that data is biased, the results will be too. Let's discuss the implications of AI-driven decision-making and explore potential solutions to mitigate that bias.
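To make the bias point concrete, here's a toy sketch of one common check, demographic parity: compare a model's decision rates across groups. Everything here (the groups, the outcomes, the 2% formatting) is made up purely for illustration; real fairness audits are far more involved than this.

# Toy check for one fairness notion (demographic parity):
# does a model approve the two groups at similar rates?
# All data here is invented for illustration.

decisions = [
    # (group, model_approved)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", True), ("B", False), ("B", False),
]

def approval_rate(group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("A")  # 0.75
rate_b = approval_rate("B")  # 0.25
gap = abs(rate_a - rate_b)

print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}, gap: {gap:.0%}")
# A large gap doesn't prove unfairness by itself, but it's a red flag
# that the training data or the model deserves a closer look.

A gap like the 50% one above usually traces straight back to the training data, which is exactly the "biased in, biased out" problem.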
 
AI doesn't have deductive reasoning and can't adapt its decisions to different circumstances and contexts. I can never trust AI with decision-making. The most I'd use it for is running analysis; the decisions themselves are for humans.
 
Have you heard about AI self-driving cars? They're getting so much attention, but being the kind of person I am, I would never trust an AI to drive me and not kill me. When it comes to decision-making, AI is the last thing I would allow to do it for me.
That's a valid concern. Self-driving cars carry real risks, so it's natural to doubt whether AI can make critical choices. As it stands, AI has made a lot of progress, but it is not perfect and can still make mistakes. That is why human supervision is required at critical moments.
 
I would never trust an AI to drive me and not kill me
That poses a question too: what if it could prevent you from being killed by making a split-second decision to avoid a collision you couldn't even see coming because the other vehicle was in your blind spot? Using your speed and direction along with the other vehicle's, it could veer slightly and stop so that your car wasn't touched at all.

On the other hand, I often worry about the decision-making behind that, too.

What if it could prevent you from being killed by something coming at you in a blind spot, but to do so, it had to steer into a baby carriage? Would the car's AI want to protect you over a mother and child?
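Just to put numbers on the "speed and direction" part, here's a toy closest-point-of-approach check (every value is invented, and this is nothing like a real autonomy stack): given both cars' positions and velocities, it estimates how close they'll get and when, which is the kind of arithmetic a computer can do long before a human has checked the mirror.

# Toy closest-point-of-approach (CPA) check in 2D.
# Positions in meters, velocities in m/s; all numbers invented.

def time_to_cpa(p1, v1, p2, v2):
    # Relative position and velocity
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    if dv2 == 0:
        return 0.0  # same velocity: separation never changes
    # Time that minimizes |relative position + t * relative velocity|
    return max(0.0, -(dx * dvx + dy * dvy) / dv2)

def distance_at(t, p1, v1, p2, v2):
    ax, ay = p1[0] + v1[0] * t, p1[1] + v1[1] * t
    bx, by = p2[0] + v2[0] * t, p2[1] + v2[1] * t
    return ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5

# Your car heading east at 25 m/s; another car in your blind spot,
# converging from behind-left at 30 m/s.
me, my_v = (0.0, 0.0), (25.0, 0.0)
other, other_v = (-20.0, -2.0), (30.0, 1.0)

t = time_to_cpa(me, my_v, other, other_v)
d = distance_at(t, me, my_v, other, other_v)
print(f"closest approach: {d:.1f} m in {t:.1f} s")
if d < 2.0:  # rough combined half-widths of two cars
    print("collision likely -> plan a small evasive veer")

Of course, detecting the collision is the easy part. Deciding where to veer is where the ethics come in.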
 
What if it could prevent you from being killed by something coming at you in a blind spot, but to do so, it had to steer into a baby carriage?
A self-driving car has to decide which way to go and what it will probably end up hitting. This demonstrates the problem of trying to teach an AI system to make moral decisions, especially in extreme conditions. Any mistake can cost a human life. I can't see myself riding in a self-driving car.
 
This demonstrates the problem of trying to teach an AI system to make moral decisions, especially in extreme conditions.
Self-driving cars can be dangerous. It's like allowing the AI to make decisions in matters of life and death. That is a threat to human life, and it's why anyone would be concerned about relying on artificial intelligence in such circumstances.
 
What if it could prevent you from being killed by something coming at you in a blind spot, but to do so, it had to steer into a baby carriage?

There are so many probabilities and variables that might be involved, which is why I can't leave that decision to AI. Can AI be held accountable when its decision goes wrong? I don't think so. I'll keep making that call myself until further notice.
 
Have you heard about AI self-driving cars? They're getting so much attention, but being the kind of person I am, I would never trust an AI to drive me and not kill me. When it comes to decision-making, AI is the last thing I would allow to do it for me.
I get your point. Should something go wrong with the program during self-driving, you will have nobody to blame but yourself for entrusting your life to a machine.
 
When AI is deployed to make ethical decisions, it won't do better than the ethical standards of its developers, who will most likely program it to reflect them. And those might not conform to optimal ethical standards.
 