I work with law enforcement. Therefore, I keep up with emerging topics in law enforcement and investigations.
Artificial Intelligence-powered robotic police technology is being developed to make law enforcement safer for the human officers involved.
Imagine a robot cop using machine learning to determine whether you pose enough of a threat to the public to justify using deadly force against you. Now imagine it being wrong, because AI isn't always right.
Any thoughts about this?
At some point, when is it time to say NO to using AI? When is it time to let humans keep doing the jobs that require human decision-making?