
Are robots too insecure for law enforcement to use?

In late November, the San Francisco Board of Supervisors voted 8-3 to give police the option to deploy potentially lethal, remote-controlled robots in emergencies, sparking an international outcry over the use of “killer robots” by law enforcement. The San Francisco Police Department (SFPD), which was behind the proposal, said it would deploy robots equipped with explosive charges to “contact, incapacitate, or disorient violent, armed, or dangerous suspects” only when lives are at stake.

Missing from the heaps of media coverage is any mention of how digitally secure a lethal robot might be, or whether a vulnerability or malicious threat actor could interfere with the machine’s functioning, no matter how skilled its operator, with catastrophic consequences. Experts warn that robots are frequently insecure and open to exploitation and, for those reasons alone, should not be used with the intent to harm human beings.

SFPD’s armed robot proposal is under review

The law enforcement agency argued that the robots would be used only in extreme circumstances and that only a small number of high-ranking officers could authorize their use as a deadly force option. The SFPD also stressed that the robots would not be autonomous and would be operated remotely by officers trained to use them.

The proposal came after the SFPD struck language from a draft policy governing the city’s use of its military-style weapons. “Robots shall not be used as a use of force against any person,” read the stricken provision, which had been added by Board of Supervisors Rules Committee Chair Aaron Peskin. Its removal cleared the way for the SFPD to modify any of the department’s 17 robots to engage in lethal force.

After the public outcry over the prospect of robots killing people, the Board of Supervisors reversed itself a week later, voting 8-3 to ban police from using remote-controlled robots with lethal force. The supervisors separately sent the original killer-robot provision of the policy back to the Board’s Rules Committee for further review, meaning it could come back for approval in the future.

Robots have been slowly moving toward lethal force

Military and law enforcement agencies have used robots for decades, starting with mechanical devices built for explosive ordnance disposal (EOD) or, more simply, bomb disposal. In 2016, after five Dallas police officers were killed during a protest over the deaths of Alton Sterling and Philando Castile, the Dallas Police Department deployed a small robot designed to safely investigate and detonate explosives. Using what was likely a ten-year-old robot rigged with an explosive charge, police killed the sniper, Micah Xavier Johnson, while keeping investigators safe, in the first known case of an explosive-rigged robot being used to neutralize a suspect.

More recently, police departments have expanded their applications of robotic technology, including a dog-like robot from Boston Dynamics called Spot. The Massachusetts State Police temporarily used Spot as a “mobile remote observation device” to provide troopers with images of suspicious devices or potentially hazardous locations that could harbor criminal suspects.

In October 2022, the Oakland Police Department (OPD) took the killer-robot concept to another level with a proposal to equip its stable of robots with a percussion-actuated nonelectric (PAN) disruptor, a shotgun-like attachment that directs explosive force, usually a blank shotgun shell or pressurized water, at suspected bombs while human operators remain at a safe distance. The OPD eventually agreed to language that would ban any offensive use of robots against people, with the exception of delivering pepper spray.

Other robots have been developed to disperse chemical agents at the scene of a standoff or to deploy Tasers to subdue violent suspects without putting police officers at risk of harm.

As weaponized robotics creeps closer, a group of six leading robotics companies, led by Boston Dynamics, released an open letter in early October calling for general-purpose robots not to be weaponized. “We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work raises new risks of harm and serious ethical issues,” the letter states. “Weaponized applications of these newly capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society. For these reasons, we do not support the weaponization of our advanced-mobility general-purpose robots.”

Robots have a track record of insecurity

Given the increasing prevalence of robots in military, industrial, and healthcare settings, a good deal of research has been done on robot security. Academic researchers in Jordan developed an attack tool to carry out specific attacks. They successfully breached the security of a robot platform called PeopleBot, launched DDoS attacks against it, and stole sensitive data.

Researchers at IOActive attempted to hack some of the most popular home, business, and industrial robots on the market. They found serious cybersecurity issues in several robots from multiple vendors, leading them to conclude that current robot technology is insecure and susceptible to attack.

Trend Micro researchers looked at how industrial robots can be hacked. They found that the devices they studied ran outdated software, vulnerable operating systems and libraries, and weak authentication systems, including default credentials that in some cases could not be changed. They also found tens of thousands of industrial devices residing on public IP addresses, increasing the risk that attackers could access and compromise them.
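The public-IP finding is easy to underestimate. Many research and industrial robots run the original Robot Operating System (ROS 1), whose central master exposes the robot’s entire node-and-topic graph over XML-RPC with no authentication at all. As a minimal illustrative sketch (not code from any of the studies above; the address is a documentation placeholder), anyone who can reach such a master can enumerate what the robot is doing:

```python
# Minimal sketch: query an unauthenticated ROS 1 master over XML-RPC.
# A ROS 1 master exposes the robot's full node/topic graph on TCP
# port 11311 with no authentication; HOST is a documentation
# placeholder, not a real device found by any of the studies above.
import xmlrpc.client

HOST = "198.51.100.23"  # placeholder address (TEST-NET-2 range)
master = xmlrpc.client.ServerProxy(f"http://{HOST}:11311")

# getSystemState is part of the standard ROS master API; it returns
# (status code, status message, [publishers, subscribers, services]).
code, status, (publishers, subscribers, services) = \
    master.getSystemState("/cso_demo")

if code == 1:  # 1 means success in the ROS master API
    print(f"{len(publishers)} published topics, {len(services)} services")
    for topic, nodes in publishers[:10]:
        print(f"  {topic} <- published by {nodes}")
```

Because ROS 1 also places no restrictions on who may register as a publisher, an attacker who can list topics this way can usually inject messages on them as well.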

Victor Mayoral-Vilches, founder of robot security firm Alias Robotics, wrote a robot hacking manual because “robots are often shipped insecure and, in some cases, fully unprotected.” He asserts that defensive security mechanisms for robots are still in their infancy and that robot vendors generally do not take responsibility in a timely manner, which extends the zero-day exposure window to several years on average. “A robot can be hacked physically or remotely by a malicious actor in a matter of seconds,” Mayoral-Vilches tells CSO. “If they are weaponized, losing control of these systems means empowering malicious actors with remotely controlled robots, which could potentially be deadly. We need to send a clear message to the citizens of San Francisco that these robots are not secure and thereby not safe.”

Earlier this year, researchers at healthcare IoT security company Cynerio reported that they had found a set of five critical zero-day vulnerabilities, which they call JekyllBot:5, in hospital robots that enabled remote attackers to control the robots and their online consoles. “Robots are incredibly useful tools that can be used for many, many different purposes,” Asher Brass, Cynerio’s head of cyber network analysis, tells CSO. But, he adds, robots are a double-edged sword. “If you’re talking about a lethal situation or anything like that, there are huge flaws from a cybersecurity perspective that are very scary.”

“There’s a real disconnect between leadership in any position, whether it’s political, hospital, etc., in understanding what they’re voting to approve, versus understanding the real risks out there,” Chad Holmes, cybersecurity evangelist at Cynerio, tells CSO.

Steps to improve robot security

When asked about the specific SFPD robots included in the military-use inventory, machines made by robotics companies REMOTEC, QinetiQ, iRobot, and Recon Robotics, Mayoral-Vilches says many of these systems are based on the legacy Joint Architecture for Unmanned Systems (JAUS) interoperability standard. “We have encountered JAUS implementations that are not up to date in terms of security threats. There is not enough dialogue about cyber insecurity among JAUS service providers.”

According to Mayoral-Vilches, a better option for safer robots would be the more modern Robot Operating System 2 (ROS 2), an alternative robotics framework that shows increasing attention to cyber insecurity.
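ROS 2’s security tooling (SROS2) can authenticate and encrypt traffic between a robot’s software nodes, but it protects nothing if a deployment never switches it on. As a hypothetical preflight guard (a sketch, not vendor code; the environment variables are ROS 2’s documented security switches, while the guard itself is an assumption for illustration), a launcher could refuse to start unless security is enforced:

```python
# Hypothetical preflight check for a ROS 2 deployment: abort unless
# SROS2 security is enabled and set to "Enforce". The environment
# variables below are ROS 2's documented security settings; this
# guard is an illustrative sketch, not part of any vendor's product.
import os
import sys

REQUIRED = {
    "ROS_SECURITY_ENABLE": "true",      # turn SROS2 on
    "ROS_SECURITY_STRATEGY": "Enforce", # reject unsecured participants
}

def assert_sros2_enforced() -> None:
    for var, expected in REQUIRED.items():
        if os.environ.get(var, "").lower() != expected.lower():
            sys.exit(f"refusing to start: {var} must be '{expected}'")
    if not os.environ.get("ROS_SECURITY_KEYSTORE"):
        sys.exit("refusing to start: no ROS_SECURITY_KEYSTORE configured")

if __name__ == "__main__":
    assert_sros2_enforced()
    print("SROS2 enforced; proceeding to bring up nodes.")
```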

It is not only the manufacture of the devices themselves that is a concern, but also how they are deployed in the field and who operates them. “It’s not just about the hardware, the robots themselves, how they’re developed, how they’re secured; it’s also how they’re deployed and used,” says Holmes. “When it comes to putting them in the field with a group of police officers, if they’re not deployed properly, no matter how secure they are, they may still be vulnerable to attack, capture, etc. So, it’s not just about the manufacture; it’s also about who’s using them.”

Mayoral-Vilches believes the following four key steps would go a long way toward improving the security of robots in the field:

  1. The authorities managing these systems (or external consultants) should maintain appropriate, up-to-date threat models and periodically reassess the threat landscape for new risks arising from security research (newly disclosed flaws).
  2. Independent robotics and security experts should periodically perform comprehensive security audits of each of these systems (both jointly and independently).
  3. Each system should include a tamper-resistant, black-box-like subsystem that forensically records all events and is analyzed after every mission.
  4. Each system should include a remote kill switch that can halt the robot’s operation whenever necessary (a sketch of how steps 3 and 4 might fit together follows this list).
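To make steps 3 and 4 concrete, here is a minimal sketch of a heartbeat dead-man’s switch paired with a hash-chained event log. All names are hypothetical; a real deployment would use cryptographically signed heartbeats and tamper-resistant logging hardware rather than plain Python objects:

```python
# Illustrative sketch of steps 3 and 4 above: a hash-chained event
# log (a tamper-evident "black box") plus a heartbeat dead-man's
# switch that halts operation when the operator link goes quiet.
import hashlib
import json
import time

HEARTBEAT_TIMEOUT_S = 2.0  # halt if no heartbeat within this window


class BlackBox:
    """Append-only log in which each entry hashes its predecessor,
    so any after-the-fact edit breaks the chain and is detectable."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._prev_hash = "0" * 64

    def record(self, event: str) -> None:
        entry = {"t": time.time(), "event": event, "prev": self._prev_hash}
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)


class KillSwitch:
    """Permits operation only while fresh operator heartbeats arrive."""

    def __init__(self, log: BlackBox) -> None:
        self._log = log
        self._last_beat = time.monotonic()

    def heartbeat(self) -> None:
        self._last_beat = time.monotonic()
        self._log.record("heartbeat")

    def allowed_to_operate(self) -> bool:
        ok = (time.monotonic() - self._last_beat) < HEARTBEAT_TIMEOUT_S
        if not ok:
            self._log.record("kill switch tripped: heartbeat timeout")
        return ok


if __name__ == "__main__":
    log = BlackBox()
    switch = KillSwitch(log)
    switch.heartbeat()
    print("operating:", switch.allowed_to_operate())  # True
    time.sleep(HEARTBEAT_TIMEOUT_S + 0.5)
    print("operating:", switch.allowed_to_operate())  # False -> halt
    print("events logged:", len(log.entries))
```

The design choice worth noting is that the switch fails closed: the robot must continuously prove the operator link is alive, rather than waiting for an explicit stop command that a jammed or hijacked link might never deliver.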

For now, however, Mayoral-Vilches believes that police forces using killer robots would be a “huge mistake.” It is “a bad decision morally and technically. Robots are not mature, especially from a cybersecurity perspective.”

Not everyone agrees that law enforcement using robots equipped to kill is a bad idea. “If you just said you had a tool that would allow the police to safely stop a sniper from killing more people without endangering a group of cops, and that the decision whether or not to detonate the device would be made by the people…” Jeff Burnstein, president of the Association for Advancing Automation, tells CSO, adding that his association has not taken a position on the issue. “I wouldn’t support the same position if the machine was making the decision. For me, that’s the difference.”

Copyright © 2022 IDG Communications, Inc. All Rights Reserved.
