
Discuss: AI-Powered Robot Police

I work with law enforcement, so I keep up with emerging topics in law enforcement and investigations.

AI-powered robotic police technology is being developed to make law enforcement safer for the human officers involved.

Imagine a robot cop using machine learning to decide whether you're enough of a threat to the public to justify deadly force against you. Imagine it being wrong, because AI isn't always right.

Any thoughts about this?

At some point, when is it time to say NO to using AI? When is it time to let humans keep doing human jobs that require human decision-making?
 
The greatest fear with AI-driven robot police is the lack of accountability and the misplaced trust in technological precision. No matter the safety gains, we cannot allow AI to make life-or-death decisions given its inaccuracies.

Here's a scary scenario:

Right now, when a human officer is forced to use deadly force, they may or may not be found justified. They may be branded a killer and jailed, or their life can be ruined.

If a robot kills, no human has to take the blame. We'll just decommission that robot and improve the next ones.

Scary thought, but could it become a reality?
 
The potential for mistakes, bias, and lack of transparency is alarming, and don't get me started on the emotional and psychological toll on anyone required to interact with a robot cop. Human emotion and judgment are crucial when it comes to policing.
For sure.

I think we will see this experiment in the theater of war first. After that, I think it will move to emergency services, even if at first the AI is not allowed to use force.

But I do look at it this way.

It would be seen as safer to send in a robot to terminate a threat than to send SWAT operators who might get hurt in the process.

AI will start killing our jobs for sure; hopefully it won't start killing us as well.
 
Imagine a robot cop using machine learning to decide whether you're enough of a threat to the public to justify deadly force against you. Imagine it being wrong, because AI isn't always right.

Any thoughts about this?
I've seen Terminator. No, just, no.

A robot's actions cannot be held against it, and the LLM used to determine the threat would just be cleaned up, with nobody held accountable for the poor decision it made.

When it comes to deadly force or apprehension, leave it to people.

It would be seen as safer to send in a robot to terminate a threat than to send SWAT operators who might get hurt in the process.
Who determines the threat, though? If it's a person who can be held accountable for sending in an AI robot, sure. Then we can put that supervisor on blast if the AI made the wrong call, or even if the supervisor made the wrong call, treating someone as a threat when they weren't.

SWAT operators who might get hurt in the process.
SWAT operators are very good at what they do. Remember, they're not always there to eliminate a threat.

There are domestic violence (DV) calls and hostage situations as well. You would then need to send in "friendly" bots, which people might not trust, to help them evacuate the area before the terminator bot comes in. And sometimes that can't be done.

Human judgment and teamwork are still the best way to accomplish the mission, in my opinion. SWAT operators signed up knowing the risks they may face, but they go in and do the job anyway, not because it's a job, but because they want to serve their communities.

AI could surely help 9-1-1 operators, though, at least in the first stage: collecting the necessary information that can otherwise slow down the response. Then the caller could be handed to a live person and give updates as needed for the operator to pass on to the responders, be it EMS or the police.
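Something like the sketch below is what I'm picturing. It's just a rough illustration in Python; the names (`CallRecord`, `ai_intake`, `hand_off`) and questions are all made up, not any real dispatch system. The point is the division of labor: the bot only gathers structured facts, and a human makes every response decision.

```python
# Hypothetical AI-assisted 9-1-1 intake flow (illustration only).
# Stage 1: a bot collects the basic facts. Stage 2: a live operator
# takes over with the record prefilled. The AI never decides the response.
from dataclasses import dataclass, field

@dataclass
class CallRecord:
    location: str = ""
    emergency_type: str = ""          # e.g. "medical", "fire", "police"
    people_involved: int = 0
    notes: list[str] = field(default_factory=list)

# The canned questions a first-stage bot might walk through.
REQUIRED_QUESTIONS = [
    ("location", "What is the address of the emergency?"),
    ("emergency_type", "Is this a medical, fire, or police emergency?"),
    ("people_involved", "How many people are involved or hurt?"),
]

def ai_intake(ask) -> CallRecord:
    """Stage 1: collect the basics; never decide the response."""
    record = CallRecord()
    for field_name, question in REQUIRED_QUESTIONS:
        answer = ask(question)        # ask() stands in for the caller dialog
        if field_name == "people_involved":
            try:
                record.people_involved = int(answer)
            except ValueError:
                record.notes.append(f"unparsed answer: {answer!r}")
        else:
            setattr(record, field_name, answer.strip())
    return record

def hand_off(record: CallRecord) -> None:
    """Stage 2: transfer to a live operator with everything prefilled."""
    print("Transferring to a live operator with:", record)

if __name__ == "__main__":
    # Simulate a caller with canned answers instead of real speech input.
    answers = iter(["123 Main St", "medical", "2"])
    hand_off(ai_intake(lambda q: next(answers)))
```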
 
Who determines the threat, though? If it's a person who can be held accountable for sending in an AI robot, sure. Then we can put that supervisor on blast if the AI made the wrong call, or even if the supervisor made the wrong call, treating someone as a threat when they weren't.
Remember Christopher Dorner, the ex-LAPD cop who set off a massive manhunt? They used a robot to kill him. Not AI-powered, but imagine what they could do now?

Probably use drone swarms.

 